Feb 02 00:10:00 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 00:10:01 crc kubenswrapper[5108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 00:10:01 crc kubenswrapper[5108]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 02 00:10:01 crc kubenswrapper[5108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 00:10:01 crc kubenswrapper[5108]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 00:10:01 crc kubenswrapper[5108]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 02 00:10:01 crc kubenswrapper[5108]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.217277 5108 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234038 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234094 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234105 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234114 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234123 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234132 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234141 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234154 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234163 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234171 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234179 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234188 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234197 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234205 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234213 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234221 5108 feature_gate.go:328] unrecognized feature gate: Example
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234256 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234265 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234273 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234282 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234290 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234297 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234306 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234315 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234323 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234331 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234341 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234350 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234358 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234367 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234375 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234383 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234394 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234402 5108 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234410 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234419 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234427 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234435 5108 feature_gate.go:328] unrecognized feature gate: Example2
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234444 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234452 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234459 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234467 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234475 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234483 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234495 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234508 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234517 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234526 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234534 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234542 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234550 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234558 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234565 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234574 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234583 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234591 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234599 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234607 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234615 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234623 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234631 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234639 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234646 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234655 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234664 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234672 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234680 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234688 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234696 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234704 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234712 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234725 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234737 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234747 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234755 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234763 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234770 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234778 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234803 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234812 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234820 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234827 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234836 5108 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234847 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234857 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.234868 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.235986 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236002 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236010 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236018 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236027 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236035 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236044 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236052 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236060 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236069 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236076 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236084 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236092 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236100 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236109 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236120 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
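The repeated "unrecognized feature gate" warnings are expected on OpenShift: the cluster-wide gate list carries many OpenShift-only names (GatewayAPI, AdminNetworkPolicy, PinnedImages, and so on) that the upstream kubelet's gate registry does not know, so feature_gate.go:328 logs each one at warning level and ignores it, and the list is parsed and logged again on each configuration pass during startup. These names are driven by the cluster-scoped FeatureGate object. A minimal sketch of that object, assuming the standard config.openshift.io/v1 schema, with gate names borrowed from the warnings above purely for illustration:

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster               # singleton; the name must be "cluster"
spec:
  featureSet: CustomNoUpgrade # opt-in gate set; irreversible on a real cluster
  customNoUpgrade:
    enabled:
      - GatewayAPI            # names as they appear in the warnings
      - AdminNetworkPolicy
    disabled:
      - UpgradeStatus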
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236129 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236138 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236147 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236156 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236164 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236172 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236180 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236188 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236209 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236219 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236254 5108 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236262 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236270 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236279 5108 feature_gate.go:328] unrecognized feature gate: Example2
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236288 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236295 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236303 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236312 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236320 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236328 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236338 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236345 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236353 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236360 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236369 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236377 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236384 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236394 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236403 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236484 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236493 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236502 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236511 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236522 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236575 5108 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236584 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236592 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236600 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236611 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236622 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236630 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236655 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236663 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236671 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236679 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236687 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236694 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236703 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236711 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236725 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236733 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236741 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236750 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236760 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236770 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236779 5108 feature_gate.go:328] unrecognized feature gate: Example
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236790 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236799 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236811 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236819 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236827 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236835 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236843 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236850 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236858 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236865 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236873 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236881 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236889 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.236897 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238563 5108 flags.go:64] FLAG: --address="0.0.0.0"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238597 5108 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238629 5108 flags.go:64] FLAG: --anonymous-auth="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238641 5108 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238668 5108 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238680 5108 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238694 5108 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238710 5108 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238722 5108 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238733 5108 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238744 5108 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238755 5108 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238768 5108 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238779 5108 flags.go:64] FLAG: --cgroup-root=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238790 5108 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238801 5108 flags.go:64] FLAG: --client-ca-file=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238813 5108 flags.go:64] FLAG: --cloud-config=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238824 5108 flags.go:64] FLAG: --cloud-provider=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238835 5108 flags.go:64] FLAG: --cluster-dns="[]"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238855 5108 flags.go:64] FLAG: --cluster-domain=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238864 5108 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238873 5108 flags.go:64] FLAG: --config-dir=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238882 5108 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238892 5108 flags.go:64] FLAG: --container-log-max-files="5"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238903 5108 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238917 5108 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238927 5108 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238937 5108 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238947 5108 flags.go:64] FLAG: --contention-profiling="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238956 5108 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238964 5108 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238973 5108 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238982 5108 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.238994 5108 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239003 5108 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239011 5108 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239020 5108 flags.go:64] FLAG: --enable-load-reader="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239051 5108 flags.go:64] FLAG: --enable-server="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239061 5108 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239079 5108 flags.go:64] FLAG: --event-burst="100"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239088 5108 flags.go:64] FLAG: --event-qps="50"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239097 5108 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239106 5108 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239115 5108 flags.go:64] FLAG: --eviction-hard=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239125 5108 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239134 5108 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239143 5108 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239152 5108 flags.go:64] FLAG: --eviction-soft=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239161 5108 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239173 5108 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239184 5108 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239195 5108 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239206 5108 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239218 5108 flags.go:64] FLAG: --fail-swap-on="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239274 5108 flags.go:64] FLAG: --feature-gates=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239303 5108 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239312 5108 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239323 5108 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239332 5108 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239342 5108 flags.go:64] FLAG: --healthz-port="10248"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239351 5108 flags.go:64] FLAG: --help="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239360 5108 flags.go:64] FLAG: --hostname-override=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239368 5108 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239377 5108 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239386 5108 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239395 5108 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239403 5108 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239412 5108 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239420 5108 flags.go:64] FLAG: --image-service-endpoint=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239429 5108 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239457 5108 flags.go:64] FLAG: --kube-api-burst="100"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239466 5108 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239476 5108 flags.go:64] FLAG: --kube-api-qps="50"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239484 5108 flags.go:64] FLAG: --kube-reserved=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239493 5108 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239502 5108 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239511 5108 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239519 5108 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239584 5108 flags.go:64] FLAG: --lock-file=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239595 5108 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239604 5108 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239613 5108 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239628 5108 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239637 5108 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239646 5108 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239655 5108 flags.go:64] FLAG: --logging-format="text"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239664 5108 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239674 5108 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239682 5108 flags.go:64] FLAG: --manifest-url=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239692 5108 flags.go:64] FLAG: --manifest-url-header=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239704 5108 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239713 5108 flags.go:64] FLAG: --max-open-files="1000000"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239725 5108 flags.go:64] FLAG: --max-pods="110"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239734 5108 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239743 5108 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239752 5108 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239761 5108 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239770 5108 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239778 5108 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239787 5108 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239809 5108 flags.go:64] FLAG: --node-status-max-images="50"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239818 5108 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239827 5108 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239887 5108 flags.go:64] FLAG: --pod-cidr=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239897 5108 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239911 5108 flags.go:64] FLAG: --pod-manifest-path=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239919 5108 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239928 5108 flags.go:64] FLAG: --pods-per-core="0"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239937 5108 flags.go:64] FLAG: --port="10250"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239946 5108 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239955 5108 flags.go:64] FLAG: --provider-id=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239964 5108 flags.go:64] FLAG: --qos-reserved=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239972 5108 flags.go:64] FLAG: --read-only-port="10255"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239981 5108 flags.go:64] FLAG: --register-node="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239990 5108 flags.go:64] FLAG: --register-schedulable="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.239999 5108 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240014 5108 flags.go:64] FLAG: --registry-burst="10"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240023 5108 flags.go:64] FLAG: --registry-qps="5"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240031 5108 flags.go:64] FLAG: --reserved-cpus=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240040 5108 flags.go:64] FLAG: --reserved-memory=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240050 5108 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240060 5108 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240069 5108 flags.go:64] FLAG: --rotate-certificates="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240078 5108 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240086 5108 flags.go:64] FLAG: --runonce="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240095 5108 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240104 5108 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240113 5108 flags.go:64] FLAG: --seccomp-default="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240121 5108 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240130 5108 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240139 5108 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240148 5108 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240157 5108 flags.go:64] FLAG: --storage-driver-password="root"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240165 5108 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240174 5108 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240182 5108 flags.go:64] FLAG: --storage-driver-user="root"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240205 5108 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240214 5108 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240223 5108 flags.go:64] FLAG: --system-cgroups=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240273 5108 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240288 5108 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240297 5108 flags.go:64] FLAG: --tls-cert-file=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240306 5108 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240324 5108 flags.go:64] FLAG: --tls-min-version=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240333 5108 flags.go:64] FLAG: --tls-private-key-file=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240341 5108 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240350 5108 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240359 5108 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240368 5108 flags.go:64] FLAG: --v="2"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240379 5108 flags.go:64] FLAG: --version="false"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240391 5108 flags.go:64] FLAG: --vmodule=""
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240403 5108 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.240413 5108 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240661 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240676 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240685 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240693 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240702 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240710 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240721 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
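Per the deprecation notices at the top of this log, several of the flags in the dump above (flags.go:64) are meant to move into the file named by --config, which the dump shows is /etc/kubernetes/kubelet.conf. A minimal KubeletConfiguration sketch covering those flags, assuming the upstream kubelet.config.k8s.io/v1beta1 schema and reusing the values from the FLAG dump:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "/var/run/crio/crio.sock"            # replaces --container-runtime-endpoint; a unix:// scheme is also accepted
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec" # replaces --volume-plugin-dir
systemReserved:                                                # replaces --system-reserved=cpu=200m,ephemeral-storage=350Mi,memory=350Mi
  cpu: "200m"
  ephemeral-storage: "350Mi"
  memory: "350Mi"
registerWithTaints:                                            # replaces --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"

(--minimum-container-ttl-duration has no direct field; its deprecation message points at eviction tuning instead, whose config-file equivalents are evictionHard and evictionSoft.)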
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240732 5108 feature_gate.go:328] unrecognized feature gate: Example Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240742 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240750 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240759 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240767 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240774 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240782 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240790 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240798 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240825 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240833 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240841 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240849 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240857 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240866 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240874 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240883 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240893 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240903 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240914 5108 feature_gate.go:328] unrecognized feature gate: Example2 Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240924 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240938 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240950 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240959 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240968 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240977 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240990 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.240998 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241007 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241016 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241025 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241033 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241042 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241050 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241058 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241066 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241075 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241083 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241090 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241098 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241106 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241114 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241136 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241144 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241152 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241160 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241168 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 
00:10:01.241176 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241184 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241191 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241200 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241207 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241215 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241223 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241259 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241270 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241278 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241286 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241295 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241303 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241311 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241319 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241327 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241335 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241343 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241352 5108 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241359 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241367 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241375 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241383 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241464 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241473 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241481 5108 feature_gate.go:328] unrecognized 
feature gate: OVNObservability Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241488 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241496 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241515 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241524 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241532 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.241539 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.241565 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.259589 5108 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.260129 5108 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260211 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260220 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260245 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260253 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260260 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260266 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260272 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260278 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260283 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260289 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260294 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260299 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260304 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 02 00:10:01 crc 
kubenswrapper[5108]: W0202 00:10:01.260308 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260313 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260318 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260322 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260327 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260332 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260338 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260344 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260350 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260356 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260362 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260368 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260374 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260381 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260389 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260395 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260400 5108 feature_gate.go:328] unrecognized feature gate: Example2 Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260405 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260415 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
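The wall of `unrecognized feature gate` warnings above is benign: the rejected names (RouteAdvertisements, PinnedImages, the NewOLM* family, and so on) are OpenShift-level feature gates being offered to a kubelet whose feature-gate registry only knows the upstream Kubernetes gates. Each unknown name is logged at W level and skipped, and startup continues. A minimal sketch of that rejection path, assuming the upstream `k8s.io/component-base/featuregate` package (upstream `SetFromMap` returns an error for an unknown gate; the kubelet build logging here evidently reports it as a warning instead):

```go
// Sketch only: how a feature-gate registry reacts to a name it was never
// taught, which is the mechanism behind the warnings above.
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

// ImageVolume is one of the gates shown enabled in the resolved map
// that feature_gate.go:384 logs in this unit.
const ImageVolume featuregate.Feature = "ImageVolume"

func main() {
	fg := featuregate.NewFeatureGate()

	// Register the gates this binary actually knows about.
	if err := fg.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		ImageVolume: {Default: false, PreRelease: featuregate.Beta},
	}); err != nil {
		panic(err)
	}

	// A known gate sets cleanly.
	if err := fg.SetFromMap(map[string]bool{"ImageVolume": true}); err != nil {
		panic(err)
	}
	fmt.Println("ImageVolume enabled:", fg.Enabled(ImageVolume))

	// An OpenShift-level gate is unknown to this registry and is rejected.
	err := fg.SetFromMap(map[string]bool{"RouteAdvertisements": true})
	fmt.Println(err) // unrecognized feature gate: RouteAdvertisements
}
```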
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260421 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260427 5108 feature_gate.go:328] unrecognized feature gate: Example Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260431 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260437 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260442 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260446 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260451 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260456 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260460 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260465 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260470 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260474 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260479 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260484 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260489 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260493 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260498 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260502 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260507 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260512 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260517 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260529 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260534 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260539 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260544 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 02 00:10:01 crc kubenswrapper[5108]: 
W0202 00:10:01.260548 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260553 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260557 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260562 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260566 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260571 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260575 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260581 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260585 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260590 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260594 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260599 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260603 5108 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260608 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260612 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260617 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260621 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260626 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260631 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260635 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260640 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260645 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260650 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260654 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260660 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260666 5108 feature_gate.go:328] unrecognized feature gate: 
GatewayAPIController Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260671 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260675 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260686 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.260695 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260840 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260848 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260853 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260858 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260863 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260868 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260873 5108 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260877 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260882 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260887 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260894 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260900 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
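Alongside each warning run, `feature_gate.go:384` prints the gate map the kubelet actually resolved, and the map is identical each time; the same gate list is simply applied more than once during startup, which is why the full enumeration repeats. The resolved values are normally supplied through the `featureGates` stanza of the kubelet's config file rather than flags. A hedged sketch of decoding such a stanza into the published `KubeletConfiguration` type (the fragment is illustrative, not this node's actual config; the gate names are taken from the dump above):

```go
// Sketch: decode a featureGates stanza into the published v1beta1 type.
package main

import (
	"fmt"

	kubeletconfigv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

const fragment = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  ImageVolume: true
  DynamicResourceAllocation: false
`

func main() {
	var cfg kubeletconfigv1beta1.KubeletConfiguration
	if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.FeatureGates) // map[DynamicResourceAllocation:false ImageVolume:true]
}
```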
Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260906 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260911 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260916 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260921 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260926 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260931 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260935 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260940 5108 feature_gate.go:328] unrecognized feature gate: Example Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260945 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260951 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260956 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260962 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260967 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260972 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260977 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260981 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260986 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260990 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.260996 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261002 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261007 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261012 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261016 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261021 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261026 5108 
feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261030 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261035 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261039 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261044 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261048 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261053 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261058 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261063 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261067 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261072 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261076 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261081 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261085 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261090 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261094 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261099 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261104 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261109 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261114 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261118 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261123 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261127 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261132 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261160 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261166 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 02 
00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261172 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261177 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261183 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261188 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261193 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261198 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261203 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261208 5108 feature_gate.go:328] unrecognized feature gate: Example2 Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261213 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261217 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261242 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261248 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261254 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261269 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261276 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261282 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261287 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261293 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261298 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261304 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261310 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261315 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261321 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 02 00:10:01 crc kubenswrapper[5108]: W0202 00:10:01.261327 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.261337 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false 
NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.262315 5108 server.go:962] "Client rotation is on, will bootstrap in background" Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.266337 5108 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.272282 5108 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.272484 5108 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.273802 5108 server.go:1019] "Starting client certificate rotation" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.274066 5108 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.274264 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.306007 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.308176 5108 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.309719 5108 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.325342 5108 log.go:25] "Validated CRI v1 runtime API" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.378416 5108 log.go:25] "Validated CRI v1 image API" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.381093 5108 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.389725 5108 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-02-02-00-03-43-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.389769 5108 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} 
/var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.410667 5108 manager.go:217] Machine: {Timestamp:2026-02-02 00:10:01.407774215 +0000 UTC m=+0.683271185 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:e7aab70d-ffc3-4723-87e3-99e45b63c1a4 BootID:e3a7b5ac-876b-4877-b87d-9cb708308d6e Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:57:3e:8e Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:57:3e:8e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:42:3d:c9 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:51:db:fc Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:dd:79:59 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2e:da:f7 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2e:a3:f2:79:b4:64 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:32:fc:72:56:3b:03 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction 
Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.411612 5108 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
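The error-level entry earlier in this boot reported that part of the bootstrap client certificate referenced by `/var/lib/kubelet/kubeconfig` expired on 2025-12-03, after which the kubelet fell back to the bootstrap credentials and began client certificate rotation from `/var/lib/kubelet/pki/kubelet-client-current.pem`. A self-contained sketch for checking that expiry by hand, parsing the same PEM file the `certificate_store` entry loads (stdlib only; the path comes from the log, and reading it requires root on the node):

```go
// Sketch: print the validity of each certificate in the kubelet's
// client-cert PEM file to confirm (or rule out) the expiry above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue // the same file also carries the private key
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			continue
		}
		fmt.Printf("subject=%s notAfter=%s expired=%v\n",
			cert.Subject, cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}
```

The follow-up `Failed while requesting a signed certificate` with `connection refused` against api-int.crc.testing:6443 just means the API server was not reachable yet at this point in the boot; the certificate manager keeps retrying in the background.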
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.411815 5108 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.413855 5108 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.413897 5108 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.414176 5108 topology_manager.go:138] "Creating topology manager with none policy" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.414189 5108 container_manager_linux.go:306] "Creating device plugin manager" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.414215 5108 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.415295 5108 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.416348 5108 state_mem.go:36] "Initialized new in-memory state store" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.416537 5108 server.go:1267] "Using root directory" path="/var/lib/kubelet" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.420404 5108 kubelet.go:491] "Attempting to sync node with API server" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.420432 5108 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.420450 5108 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.420465 
5108 kubelet.go:397] "Adding apiserver pod source" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.420484 5108 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.424761 5108 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.424783 5108 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.425484 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.425535 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.429041 5108 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.429068 5108 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.436739 5108 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.437205 5108 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.438127 5108 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.439430 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.439554 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.439639 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.439712 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.439784 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.439849 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.439912 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.440004 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.440076 5108 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/fc" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.440156 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.440254 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.440879 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.442018 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.442117 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.443542 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.466973 5108 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.467045 5108 server.go:1295] "Started kubelet" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.467279 5108 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.467406 5108 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.467619 5108 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.468589 5108 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 02 00:10:01 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.470430 5108 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.471242 5108 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.472016 5108 server.go:317] "Adding debug handlers to kubelet server" Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.472040 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.472326 5108 volume_manager.go:295] "The desired_state_of_world populator starts" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.472348 5108 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.472473 5108 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.472549 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.474365 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.474617 5108 factory.go:55] Registering systemd factory Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.474658 5108 factory.go:223] Registration of the systemd container factory successfully Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.477622 5108 factory.go:153] Registering CRI-O factory Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.477678 5108 factory.go:223] Registration of the crio container factory successfully Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.477782 5108 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.477821 5108 factory.go:103] Registering Raw factory Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.477843 5108 manager.go:1196] Started watching for new ooms in manager Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.475519 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18904570221403ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.466995694 +0000 UTC m=+0.742492634,LastTimestamp:2026-02-02 00:10:01.466995694 +0000 UTC m=+0.742492634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.478650 5108 manager.go:319] Starting recovery of all containers Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.518113 5108 manager.go:324] Recovery completed Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529409 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529468 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529490 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529502 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529515 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529528 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529540 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529550 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529562 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529573 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Feb 02 00:10:01 crc 
kubenswrapper[5108]: I0202 00:10:01.529584 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529597 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529606 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529619 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529631 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529640 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529690 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529700 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529711 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529728 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529741 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 
00:10:01.529753 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529766 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529777 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529808 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529821 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529832 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529843 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529857 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529870 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529881 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529893 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529910 5108 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529921 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529932 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529944 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529958 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529970 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529983 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.529995 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530008 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530020 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530033 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530045 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530058 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530070 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530082 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530095 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530110 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530121 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530133 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530147 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530159 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530171 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530184 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530195 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530247 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530262 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530274 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530285 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530297 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530309 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530321 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530333 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530344 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530356 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530368 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530379 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530394 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530405 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530417 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530429 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530442 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530454 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530499 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530512 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530524 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530786 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530800 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530810 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530821 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530831 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530840 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530853 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530863 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530874 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530890 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530900 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" 
volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530911 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530921 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530930 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530941 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530951 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530963 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530972 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530982 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.530992 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531002 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531013 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531024 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531035 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531045 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531056 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531067 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531077 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531091 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531101 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531112 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531121 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531135 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" 
volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531145 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531158 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531184 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531194 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531206 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531239 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531254 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531265 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531275 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531285 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531295 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531304 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531314 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531324 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531335 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531347 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531357 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531379 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531390 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531400 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531410 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531421 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531432 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531442 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531452 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531462 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531472 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531481 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531491 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531501 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531511 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531522 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531532 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531542 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531552 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531562 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531571 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531582 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531592 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531601 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531610 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531620 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531630 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531639 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531649 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531659 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531670 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531682 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531694 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531704 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531714 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531724 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531734 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531747 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531757 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531767 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531777 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531787 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531797 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531809 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531818 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531828 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531839 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531851 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.531861 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539126 5108 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539172 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539194 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539212 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539253 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539274 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539291 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539311 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539327 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539343 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539359 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539375 5108 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539390 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539406 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539423 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539439 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539457 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539474 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539490 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539506 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539522 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539538 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539552 5108 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539569 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539585 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539600 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539619 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539635 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539652 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539668 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539686 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539705 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539723 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539740 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539757 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539773 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539791 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539809 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539825 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539841 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539857 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539874 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539910 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539928 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539944 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539962 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539980 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.539998 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540045 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540063 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540117 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540135 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540151 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540167 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540186 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540206 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540223 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540264 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540279 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540295 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540310 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540326 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540343 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540373 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540392 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540408 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540424 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" 
volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540472 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540490 5108 reconstruct.go:97] "Volume reconstruction finished" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.540502 5108 reconciler.go:26] "Reconciler: start to sync state" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.543479 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.546623 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.546671 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.546703 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.549412 5108 cpu_manager.go:222] "Starting CPU manager" policy="none" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.549437 5108 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.549464 5108 state_mem.go:36] "Initialized new in-memory state store" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.551844 5108 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.555646 5108 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.555713 5108 status_manager.go:230] "Starting to sync pod status with apiserver" Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.555753 5108 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.555761 5108 policy_none.go:49] "None policy: Start"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.555793 5108 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.555833 5108 state_mem.go:35] "Initializing new in-memory state store"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.555765 5108 kubelet.go:2451] "Starting kubelet main sync loop"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.556320 5108 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.558088 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.574761 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.601797 5108 manager.go:341] "Starting Device Plugin manager"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.601850 5108 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.601866 5108 server.go:85] "Starting device plugin registration server"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.602504 5108 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.602531 5108 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.602961 5108 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.603054 5108 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.603069 5108 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.609045 5108 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.609120 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.656804 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.657078 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.658115 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.658178 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.658196 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.659156 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.659535 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.659618 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.660049 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.660116 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.660130 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.660594 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.660658 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.660679 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.660937 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.661138 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.661209 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.661379 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.661419 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.661435 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.661891 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.661956 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.661982 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.662334 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.662524 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.662592 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.662952 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.663002 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.663018 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.663133 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.663166 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.663179 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.663964 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.664243 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.664288 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.664559 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.664594 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.664609 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.665313 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.665351 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.665362 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.665478 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.665511 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.666008 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.666038 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.666052 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.673430 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.697579 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.702720 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.704007 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.704047 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.704060 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.704087 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.704501 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.706593 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.743122 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.743183 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.743252 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.743278 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.743309 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.743959 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744113 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744166 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744195 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744321 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744363 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744396 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744420 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744434 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744442 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744536 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744570 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744601 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744627 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744655 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744681 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744710 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744736 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.744762 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.745175 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.745632 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.745992 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.746162 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.746440 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.746662 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.748871 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.761306 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.764321 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846214 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846359 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846376 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846397 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846481 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846519 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846559 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846582 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846603 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846609 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846561 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846631 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846655 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846678 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846687 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846713 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846748 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846757 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846777 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846793 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846805 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846853 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846885 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846952 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846987 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846960 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.847029 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.847040 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.846926 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.847115 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.847218 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.905249 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.906836 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.906949 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.907003 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.907047 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: E0202 00:10:01.908092 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Feb 02 00:10:01 crc kubenswrapper[5108]: I0202 00:10:01.998813 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.008304 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.049628 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:10:02 crc kubenswrapper[5108]: W0202 00:10:02.056731 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-4265f3d44b591bb8b0e289f11848b2db7c42b3f9bba0b32400e01c0051ab95c3 WatchSource:0}: Error finding container 4265f3d44b591bb8b0e289f11848b2db7c42b3f9bba0b32400e01c0051ab95c3: Status 404 returned error can't find the container with id 4265f3d44b591bb8b0e289f11848b2db7c42b3f9bba0b32400e01c0051ab95c3
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.061781 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.065115 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.072940 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 00:10:02 crc kubenswrapper[5108]: E0202 00:10:02.074614 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms"
Feb 02 00:10:02 crc kubenswrapper[5108]: W0202 00:10:02.094617 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-a5335a653873250d0706241bf58cb80088617194cc77c6f2586420efd182ae16 WatchSource:0}: Error finding container a5335a653873250d0706241bf58cb80088617194cc77c6f2586420efd182ae16: Status 404 returned error can't find the container with id a5335a653873250d0706241bf58cb80088617194cc77c6f2586420efd182ae16
Feb 02 00:10:02 crc kubenswrapper[5108]: W0202 00:10:02.102014 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-f16859b0b6ab13803fda4ced906666df77411ea4b8e519bb5cebf15804c7cd90 WatchSource:0}: Error finding container f16859b0b6ab13803fda4ced906666df77411ea4b8e519bb5cebf15804c7cd90: Status 404 returned error can't find the container with id f16859b0b6ab13803fda4ced906666df77411ea4b8e519bb5cebf15804c7cd90
Feb 02 00:10:02 crc kubenswrapper[5108]: W0202 00:10:02.110305 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-91fcdcd1a619eb208610cad7016b58662fd2b7185870ac215d36e5344d701012 WatchSource:0}: Error finding container 91fcdcd1a619eb208610cad7016b58662fd2b7185870ac215d36e5344d701012: Status 404 returned error can't find the container with id 91fcdcd1a619eb208610cad7016b58662fd2b7185870ac215d36e5344d701012
Feb 02 00:10:02 crc kubenswrapper[5108]: E0202 00:10:02.151190 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18904570221403ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.466995694 +0000 UTC m=+0.742492634,LastTimestamp:2026-02-02 00:10:01.466995694 +0000 UTC m=+0.742492634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.308725 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.309861 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.309892 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.309902 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.309922 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:10:02 crc kubenswrapper[5108]: E0202 00:10:02.310567 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.444730 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Feb 02 00:10:02 crc kubenswrapper[5108]: E0202 00:10:02.537609 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.561551 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"bd0a7924df6785b438d5a84b2a5644aefa953f3725713e91675112abad1620fb"}
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.562908 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"91fcdcd1a619eb208610cad7016b58662fd2b7185870ac215d36e5344d701012"}
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.564094 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f16859b0b6ab13803fda4ced906666df77411ea4b8e519bb5cebf15804c7cd90"}
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.566634 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a5335a653873250d0706241bf58cb80088617194cc77c6f2586420efd182ae16"}
Feb 02 00:10:02 crc kubenswrapper[5108]: I0202 00:10:02.568146 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4265f3d44b591bb8b0e289f11848b2db7c42b3f9bba0b32400e01c0051ab95c3"}
Feb 02 00:10:02 crc kubenswrapper[5108]: E0202 00:10:02.581607 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 02 00:10:02 crc kubenswrapper[5108]: E0202 00:10:02.649147 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 02 00:10:02 crc kubenswrapper[5108]: E0202 00:10:02.671505 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 02 00:10:02 crc kubenswrapper[5108]: E0202 00:10:02.897356 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.110793 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.112346 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.112479 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.112516 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.112565 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 02 00:10:03 crc kubenswrapper[5108]: E0202 00:10:03.113627 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.436201 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 02 00:10:03 crc kubenswrapper[5108]: E0202 00:10:03.437788 5108 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.444837 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.584555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f"}
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.584622 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca"}
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.587914 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2" exitCode=0
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.588542 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2"}
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.588748 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.589957 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.590008 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.590027 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:03 crc kubenswrapper[5108]: E0202 00:10:03.590706 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.591131 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7" exitCode=0
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.591197 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7"}
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.591485 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.592748 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.592802 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.592822 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:03 crc kubenswrapper[5108]: E0202 00:10:03.593107 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.593454 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.594414 5108 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a" exitCode=0
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.594511 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.594555 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.594581 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.594618 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.594655 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a"}
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.599116 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.600592 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.600772 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:03 crc kubenswrapper[5108]: E0202 00:10:03.601856 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.602290 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2" exitCode=0
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.602517 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2"}
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.603981 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.607821 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.607867 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:10:03 crc kubenswrapper[5108]: I0202 00:10:03.607889 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:10:03 crc kubenswrapper[5108]: E0202 00:10:03.608183 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 02 00:10:04 crc kubenswrapper[5108]: E0202 00:10:04.316023 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.444712 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Feb 02 00:10:04 crc kubenswrapper[5108]: E0202 00:10:04.497999 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="3.2s"
Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.607065 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9"}
Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.607103 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a"}
Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.611363 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6"}
Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.611386 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7"} Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.611568 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.612682 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.612714 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.612734 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:04 crc kubenswrapper[5108]: E0202 00:10:04.616236 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.621994 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb"} Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.622025 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb"} Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.624262 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509" exitCode=0 Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.624309 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509"} Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.626482 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d"} Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.626671 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.627768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.627813 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.627832 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:04 crc kubenswrapper[5108]: E0202 00:10:04.628135 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:04 crc kubenswrapper[5108]: E0202 00:10:04.655558 5108 reflector.go:200] "Failed to watch" err="failed to list 
*v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.714373 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.715511 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.715563 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.715574 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:04 crc kubenswrapper[5108]: I0202 00:10:04.715605 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 02 00:10:04 crc kubenswrapper[5108]: E0202 00:10:04.716483 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Feb 02 00:10:04 crc kubenswrapper[5108]: E0202 00:10:04.972821 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 02 00:10:05 crc kubenswrapper[5108]: E0202 00:10:05.337813 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.445136 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.633360 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3087a7daace8c6ad8a6d2570530f65d5e7ee3065879cb91a75a26f38ff7a8f52"} Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.633426 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022"} Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.633440 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448"} Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.633625 5108 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.634847 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.634897 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.634914 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:05 crc kubenswrapper[5108]: E0202 00:10:05.635279 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.638071 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5"} Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.638186 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.638207 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.638282 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.638255 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.638967 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.639013 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.639034 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.639085 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.639137 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.639164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.638973 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.639294 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.639311 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.639031 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.639463 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:05 crc kubenswrapper[5108]: I0202 00:10:05.639480 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:05 crc kubenswrapper[5108]: E0202 00:10:05.639637 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:05 crc kubenswrapper[5108]: E0202 00:10:05.639968 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:05 crc kubenswrapper[5108]: E0202 00:10:05.639983 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:05 crc kubenswrapper[5108]: E0202 00:10:05.640144 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.036713 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.644538 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4" exitCode=0 Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.644806 5108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.644861 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.645265 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4"} Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.645467 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.645746 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.645795 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.645853 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646802 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646827 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646861 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646870 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 
00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646882 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646892 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646896 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646928 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646802 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646948 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646968 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.646985 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:06 crc kubenswrapper[5108]: E0202 00:10:06.647592 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:06 crc kubenswrapper[5108]: E0202 00:10:06.647594 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:06 crc kubenswrapper[5108]: E0202 00:10:06.648000 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:06 crc kubenswrapper[5108]: E0202 00:10:06.648404 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.652090 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:10:06 crc kubenswrapper[5108]: I0202 00:10:06.714672 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.176071 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.189279 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.594834 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.652887 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24"} Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.652959 5108 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658"} Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.652976 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b"} Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.653107 5108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.653196 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.653111 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.653298 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.654731 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.654791 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.654807 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.654851 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.654872 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.654810 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.654735 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.655059 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.655086 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:07 crc kubenswrapper[5108]: E0202 00:10:07.655327 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:07 crc kubenswrapper[5108]: E0202 00:10:07.655883 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:07 crc kubenswrapper[5108]: E0202 00:10:07.656309 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.959950 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.960958 
5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.961007 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.961023 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:07 crc kubenswrapper[5108]: I0202 00:10:07.961054 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.014843 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.662163 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780"} Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.662247 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75"} Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.662383 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.662436 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.662514 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.663656 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.663685 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.663696 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.663656 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.663854 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.663901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:08 crc kubenswrapper[5108]: E0202 00:10:08.663982 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:08 crc kubenswrapper[5108]: E0202 00:10:08.664420 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.664967 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.665037 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:08 crc kubenswrapper[5108]: I0202 00:10:08.665058 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:08 crc kubenswrapper[5108]: E0202 00:10:08.665473 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.381042 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.667183 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.667346 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.669747 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.669804 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.669822 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.670354 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.670610 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.670758 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:09 crc kubenswrapper[5108]: E0202 00:10:09.670525 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:09 crc kubenswrapper[5108]: E0202 00:10:09.671648 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.956105 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.956459 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.958162 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.958250 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:09 crc kubenswrapper[5108]: I0202 00:10:09.958270 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:09 crc kubenswrapper[5108]: E0202 00:10:09.958787 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 
00:10:11 crc kubenswrapper[5108]: E0202 00:10:11.609418 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 00:10:12 crc kubenswrapper[5108]: I0202 00:10:12.136297 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Feb 02 00:10:12 crc kubenswrapper[5108]: I0202 00:10:12.136585 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:12 crc kubenswrapper[5108]: I0202 00:10:12.137594 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:12 crc kubenswrapper[5108]: I0202 00:10:12.137628 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:12 crc kubenswrapper[5108]: I0202 00:10:12.137648 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:12 crc kubenswrapper[5108]: E0202 00:10:12.137993 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:12 crc kubenswrapper[5108]: I0202 00:10:12.956789 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Feb 02 00:10:12 crc kubenswrapper[5108]: I0202 00:10:12.956907 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Feb 02 00:10:13 crc kubenswrapper[5108]: I0202 00:10:13.628347 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 02 00:10:13 crc kubenswrapper[5108]: I0202 00:10:13.628554 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:13 crc kubenswrapper[5108]: I0202 00:10:13.629280 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:13 crc kubenswrapper[5108]: I0202 00:10:13.629319 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:13 crc kubenswrapper[5108]: I0202 00:10:13.629332 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:13 crc kubenswrapper[5108]: E0202 00:10:13.629722 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:15 crc kubenswrapper[5108]: I0202 00:10:15.753790 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 02 00:10:15 crc kubenswrapper[5108]: I0202 00:10:15.753906 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 02 00:10:16 crc kubenswrapper[5108]: I0202 00:10:16.446489 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 02 00:10:16 crc kubenswrapper[5108]: I0202 00:10:16.652951 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 00:10:16 crc kubenswrapper[5108]: I0202 00:10:16.653037 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 00:10:17 crc kubenswrapper[5108]: I0202 00:10:17.270937 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 00:10:17 crc kubenswrapper[5108]: I0202 00:10:17.271013 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 00:10:17 crc kubenswrapper[5108]: E0202 00:10:17.699727 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 02 00:10:19 crc kubenswrapper[5108]: I0202 00:10:19.963603 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:10:19 crc kubenswrapper[5108]: I0202 00:10:19.963976 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:19 crc kubenswrapper[5108]: I0202 00:10:19.965310 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:19 crc kubenswrapper[5108]: I0202 00:10:19.965355 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:19 crc kubenswrapper[5108]: I0202 00:10:19.965380 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:19 crc kubenswrapper[5108]: E0202 00:10:19.965777 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:21 crc kubenswrapper[5108]: E0202 00:10:21.609860 5108 
eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.663603 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.664019 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.665304 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.665380 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.665408 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:21 crc kubenswrapper[5108]: E0202 00:10:21.666100 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.672008 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.704153 5108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.704313 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.706688 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.706794 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:21 crc kubenswrapper[5108]: I0202 00:10:21.706825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:21 crc kubenswrapper[5108]: E0202 00:10:21.707869 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.286522 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.289412 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18904570221403ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.466995694 +0000 UTC m=+0.742492634,LastTimestamp:2026-02-02 00:10:01.466995694 +0000 UTC m=+0.742492634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc 
kubenswrapper[5108]: I0202 00:10:22.289633 5108 trace.go:236] Trace[1522540275]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 00:10:10.210) (total time: 12078ms): Feb 02 00:10:22 crc kubenswrapper[5108]: Trace[1522540275]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 12078ms (00:10:22.289) Feb 02 00:10:22 crc kubenswrapper[5108]: Trace[1522540275]: [12.078658966s] [12.078658966s] END Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.289691 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.290003 5108 trace.go:236] Trace[2010358375]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 00:10:10.627) (total time: 11662ms): Feb 02 00:10:22 crc kubenswrapper[5108]: Trace[2010358375]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 11662ms (00:10:22.289) Feb 02 00:10:22 crc kubenswrapper[5108]: Trace[2010358375]: [11.662829754s] [11.662829754s] END Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.290041 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.290162 5108 trace.go:236] Trace[1953801145]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 00:10:10.069) (total time: 12220ms): Feb 02 00:10:22 crc kubenswrapper[5108]: Trace[1953801145]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 12220ms (00:10:22.290) Feb 02 00:10:22 crc kubenswrapper[5108]: Trace[1953801145]: [12.220211283s] [12.220211283s] END Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.290187 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.290542 5108 trace.go:236] Trace[554286650]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 00:10:09.339) (total time: 12951ms): Feb 02 00:10:22 crc kubenswrapper[5108]: Trace[554286650]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 12951ms (00:10:22.290) Feb 02 00:10:22 crc kubenswrapper[5108]: Trace[554286650]: [12.951124568s] [12.951124568s] END Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.290577 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User 
\"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.296780 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.297415 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.298784 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.305449 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.310957 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189045702a54bdd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.60545532 +0000 UTC m=+0.880952250,LastTimestamp:2026-02-02 00:10:01.60545532 +0000 UTC m=+0.880952250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.315672 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46118->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.315811 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46118->192.168.126.11:17697: read: connection reset by peer" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.316298 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.316385 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.321808 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.658152959 +0000 UTC m=+0.933649899,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.329065 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.65818799 +0000 UTC m=+0.933684930,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.337818 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.658202231 +0000 UTC m=+0.933699171,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.344638 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.6600935 +0000 UTC m=+0.935590430,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.351667 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.660124551 +0000 UTC m=+0.935621481,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.358197 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.660136881 +0000 UTC m=+0.935633811,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.365695 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.660632374 +0000 UTC m=+0.936129334,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.373978 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.660670095 +0000 UTC m=+0.936167065,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.380119 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.660688165 +0000 UTC m=+0.936185125,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.387010 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.661401454 +0000 UTC m=+0.936898394,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.396039 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.661428294 +0000 UTC m=+0.936925234,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.402405 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.661442195 +0000 UTC m=+0.936939145,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.408502 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.661930607 +0000 UTC m=+0.937427577,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.413910 5108 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.661969658 +0000 UTC m=+0.937466628,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.422161 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.661991359 +0000 UTC m=+0.937488329,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.430883 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.662980835 +0000 UTC m=+0.938477785,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.438768 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.663010016 +0000 UTC m=+0.938506956,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc 
kubenswrapper[5108]: E0202 00:10:22.443786 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d4669f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d4669f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546712735 +0000 UTC m=+0.822209685,LastTimestamp:2026-02-02 00:10:01.663026216 +0000 UTC m=+0.938523156,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.448479 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d379b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d379b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546652083 +0000 UTC m=+0.822149023,LastTimestamp:2026-02-02 00:10:01.663151439 +0000 UTC m=+0.938648369,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.457656 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.457599 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1890457026d415fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1890457026d415fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:01.546692094 +0000 UTC m=+0.822189044,LastTimestamp:2026-02-02 00:10:01.66317168 +0000 UTC m=+0.938668610,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.463519 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1890457046390c1d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.073402397 +0000 UTC m=+1.348899347,LastTimestamp:2026-02-02 00:10:02.073402397 +0000 UTC m=+1.348899347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.470764 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904570463b43e0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.073547744 +0000 UTC m=+1.349044714,LastTimestamp:2026-02-02 00:10:02.073547744 +0000 UTC m=+1.349044714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.476199 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1890457047da5e2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.100751916 +0000 UTC m=+1.376248846,LastTimestamp:2026-02-02 00:10:02.100751916 +0000 UTC m=+1.376248846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.480942 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570484f1797 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.108401559 +0000 UTC m=+1.383898519,LastTimestamp:2026-02-02 00:10:02.108401559 +0000 UTC m=+1.383898519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.485568 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1890457048dc505c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.117656668 +0000 UTC m=+1.393153638,LastTimestamp:2026-02-02 00:10:02.117656668 +0000 UTC m=+1.393153638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.493425 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1890457070a3a83d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.785032253 +0000 UTC m=+2.060529193,LastTimestamp:2026-02-02 00:10:02.785032253 +0000 UTC m=+2.060529193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.498981 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457070a4c1db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.785104347 +0000 UTC m=+2.060601277,LastTimestamp:2026-02-02 00:10:02.785104347 +0000 UTC m=+2.060601277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.503164 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570711fbf73 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.793164659 +0000 UTC m=+2.068661599,LastTimestamp:2026-02-02 00:10:02.793164659 +0000 UTC m=+2.068661599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.508208 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457071960844 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.800916548 +0000 UTC m=+2.076413478,LastTimestamp:2026-02-02 00:10:02.800916548 +0000 UTC m=+2.076413478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.514617 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1890457071ec8975 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.806585717 +0000 UTC m=+2.082082647,LastTimestamp:2026-02-02 00:10:02.806585717 +0000 UTC m=+2.082082647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.519679 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1890457071fcb79b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.807646107 +0000 UTC m=+2.083143037,LastTimestamp:2026-02-02 00:10:02.807646107 +0000 UTC m=+2.083143037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.527371 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189045707221f179 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.810085753 +0000 UTC m=+2.085582683,LastTimestamp:2026-02-02 00:10:02.810085753 +0000 UTC m=+2.085582683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.531562 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045707239f9ea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.811660778 +0000 UTC m=+2.087157708,LastTimestamp:2026-02-02 00:10:02.811660778 +0000 UTC m=+2.087157708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.539301 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189045707242224b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.812195403 +0000 UTC m=+2.087692333,LastTimestamp:2026-02-02 00:10:02.812195403 +0000 UTC m=+2.087692333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.548472 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045707351a0ee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:02.829988078 +0000 UTC m=+2.105485008,LastTimestamp:2026-02-02 00:10:02.829988078 +0000 UTC m=+2.105485008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.555638 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570865a4417 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.149321239 +0000 UTC m=+2.424818209,LastTimestamp:2026-02-02 00:10:03.149321239 +0000 UTC m=+2.424818209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.562128 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570872d57ef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.163154415 +0000 UTC m=+2.438651355,LastTimestamp:2026-02-02 00:10:03.163154415 +0000 UTC m=+2.438651355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.570714 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570874cda10 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.165219344 +0000 UTC m=+2.440716334,LastTimestamp:2026-02-02 00:10:03.165219344 +0000 UTC m=+2.440716334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.576858 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18904570928b817a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.35387481 +0000 UTC m=+2.629371780,LastTimestamp:2026-02-02 00:10:03.35387481 +0000 UTC m=+2.629371780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.588122 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570a0c9e4f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.592844536 +0000 UTC m=+2.868341476,LastTimestamp:2026-02-02 00:10:03.592844536 +0000 UTC m=+2.868341476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.594424 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904570a11d486d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.598309485 +0000 UTC m=+2.873806425,LastTimestamp:2026-02-02 00:10:03.598309485 +0000 UTC m=+2.873806425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.602117 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18904570a1ae24a6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.607803046 +0000 UTC m=+2.883299986,LastTimestamp:2026-02-02 00:10:03.607803046 +0000 UTC m=+2.883299986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.606846 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570a1f45abe openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:03.612404414 +0000 UTC m=+2.887901354,LastTimestamp:2026-02-02 00:10:03.612404414 +0000 UTC m=+2.887901354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.611763 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18904570c3666731 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.173526833 +0000 UTC m=+3.449023793,LastTimestamp:2026-02-02 00:10:04.173526833 +0000 UTC m=+3.449023793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.618663 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570c3ecdb91 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.182338449 +0000 UTC m=+3.457835419,LastTimestamp:2026-02-02 00:10:04.182338449 +0000 UTC m=+3.457835419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.626510 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.626812 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.626836 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570c3ed92e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.182385381 +0000 UTC m=+3.457882311,LastTimestamp:2026-02-02 00:10:04.182385381 +0000 UTC m=+3.457882311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.627897 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.627981 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.628006 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.628666 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.632862 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.633747 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570c3f206b1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.182677169 +0000 UTC m=+3.458174099,LastTimestamp:2026-02-02 00:10:04.182677169 +0000 UTC m=+3.458174099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.641115 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904570c40bee67 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.184374887 +0000 UTC m=+3.459871857,LastTimestamp:2026-02-02 
00:10:04.184374887 +0000 UTC m=+3.459871857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.646556 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18904570cb7b55f5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.309116405 +0000 UTC m=+3.584613335,LastTimestamp:2026-02-02 00:10:04.309116405 +0000 UTC m=+3.584613335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.653261 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570cbcb66be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.314363582 +0000 UTC m=+3.589860522,LastTimestamp:2026-02-02 00:10:04.314363582 +0000 UTC m=+3.589860522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.661800 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570cbd2251e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.314805534 +0000 UTC m=+3.590302454,LastTimestamp:2026-02-02 00:10:04.314805534 +0000 UTC m=+3.590302454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.669462 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570cbd52a58 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.31500348 +0000 UTC m=+3.590500420,LastTimestamp:2026-02-02 00:10:04.31500348 +0000 UTC m=+3.590500420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.675885 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570cbde22d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.315591376 +0000 UTC m=+3.591088346,LastTimestamp:2026-02-02 00:10:04.315591376 +0000 UTC m=+3.591088346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.681810 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570cbdf5f3a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.315672378 +0000 UTC m=+3.591169318,LastTimestamp:2026-02-02 00:10:04.315672378 +0000 UTC m=+3.591169318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.686842 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570cbe09c26 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.31575351 +0000 UTC m=+3.591250440,LastTimestamp:2026-02-02 00:10:04.31575351 +0000 UTC m=+3.591250440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.692393 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570db057576 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.569826678 +0000 UTC m=+3.845323608,LastTimestamp:2026-02-02 00:10:04.569826678 +0000 UTC m=+3.845323608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.697804 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570db26256e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.571968878 +0000 UTC m=+3.847465808,LastTimestamp:2026-02-02 00:10:04.571968878 +0000 UTC m=+3.847465808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.704020 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570db2fa13b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.572590395 +0000 UTC m=+3.848087325,LastTimestamp:2026-02-02 00:10:04.572590395 +0000 UTC m=+3.848087325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.707818 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.709656 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3087a7daace8c6ad8a6d2570530f65d5e7ee3065879cb91a75a26f38ff7a8f52" exitCode=255 Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.709759 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"3087a7daace8c6ad8a6d2570530f65d5e7ee3065879cb91a75a26f38ff7a8f52"} Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.709903 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.709995 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710657 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710696 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710743 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710827 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710846 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.710787 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.711533 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.711546 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:22 crc kubenswrapper[5108]: I0202 00:10:22.712035 5108 scope.go:117] "RemoveContainer" containerID="3087a7daace8c6ad8a6d2570530f65d5e7ee3065879cb91a75a26f38ff7a8f52" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.713342 5108 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570dbd3ca1e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.583348766 +0000 UTC m=+3.858845696,LastTimestamp:2026-02-02 00:10:04.583348766 +0000 UTC m=+3.858845696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.719756 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570dbe15639 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.584236601 +0000 UTC m=+3.859733531,LastTimestamp:2026-02-02 00:10:04.584236601 +0000 UTC m=+3.859733531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.725791 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904570dc355a3f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.589742655 +0000 UTC m=+3.865239585,LastTimestamp:2026-02-02 00:10:04.589742655 +0000 UTC m=+3.865239585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.731595 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570dc38aabf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.589959871 +0000 UTC m=+3.865456791,LastTimestamp:2026-02-02 00:10:04.589959871 +0000 UTC m=+3.865456791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.738599 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570dc498048 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.591063112 +0000 UTC m=+3.866560042,LastTimestamp:2026-02-02 00:10:04.591063112 +0000 UTC m=+3.866560042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.746298 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904570e47d61d9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.728680921 +0000 UTC m=+4.004177851,LastTimestamp:2026-02-02 00:10:04.728680921 +0000 UTC m=+4.004177851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.751980 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570e83df3ac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.791632812 +0000 UTC m=+4.067129742,LastTimestamp:2026-02-02 00:10:04.791632812 +0000 UTC m=+4.067129742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.757819 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570e862eb1a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.79405545 +0000 UTC m=+4.069552380,LastTimestamp:2026-02-02 00:10:04.79405545 +0000 UTC m=+4.069552380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.764741 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570e9153190 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.805738896 +0000 UTC m=+4.081235826,LastTimestamp:2026-02-02 00:10:04.805738896 +0000 UTC m=+4.081235826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.769181 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570e92a94e0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.807140576 +0000 UTC m=+4.082637506,LastTimestamp:2026-02-02 00:10:04.807140576 +0000 UTC m=+4.082637506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.780457 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18904570e9326f3e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:04.80765523 +0000 UTC m=+4.083152160,LastTimestamp:2026-02-02 00:10:04.80765523 +0000 UTC m=+4.083152160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.784787 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570f58e0054 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.01498274 +0000 UTC m=+4.290479670,LastTimestamp:2026-02-02 00:10:05.01498274 +0000 UTC m=+4.290479670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.789277 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570f683012f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.031039279 +0000 UTC m=+4.306536209,LastTimestamp:2026-02-02 00:10:05.031039279 +0000 UTC m=+4.306536209,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.796421 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570f691e318 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.032014616 +0000 UTC m=+4.307511556,LastTimestamp:2026-02-02 00:10:05.032014616 +0000 UTC m=+4.307511556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.804394 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1890457102b460df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.235601631 +0000 UTC m=+4.511098561,LastTimestamp:2026-02-02 00:10:05.235601631 +0000 UTC m=+4.511098561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.809139 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045710387965a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.249443418 +0000 UTC m=+4.524940348,LastTimestamp:2026-02-02 00:10:05.249443418 +0000 UTC m=+4.524940348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.812199 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045711add2f87 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.640929159 +0000 UTC m=+4.916426089,LastTimestamp:2026-02-02 00:10:05.640929159 +0000 UTC m=+4.916426089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.818398 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045712a082270 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.895402096 +0000 UTC m=+5.170899036,LastTimestamp:2026-02-02 00:10:05.895402096 +0000 UTC m=+5.170899036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.823306 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045712b033e6d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.911858797 +0000 UTC m=+5.187355727,LastTimestamp:2026-02-02 00:10:05.911858797 +0000 UTC m=+5.187355727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.830511 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457157009ed3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:06.649884371 +0000 UTC m=+5.925381331,LastTimestamp:2026-02-02 00:10:06.649884371 +0000 UTC m=+5.925381331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.835151 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457166f7d399 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:06.917743513 +0000 UTC m=+6.193240453,LastTimestamp:2026-02-02 00:10:06.917743513 +0000 UTC m=+6.193240453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.843748 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457167e76251 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:06.933443153 +0000 UTC m=+6.208940093,LastTimestamp:2026-02-02 00:10:06.933443153 +0000 UTC m=+6.208940093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.852765 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045716801e724 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:06.935181092 +0000 UTC m=+6.210678062,LastTimestamp:2026-02-02 00:10:06.935181092 +0000 UTC m=+6.210678062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.862364 5108 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045717850d4b6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.208789174 +0000 UTC m=+6.484286144,LastTimestamp:2026-02-02 00:10:07.208789174 +0000 UTC m=+6.484286144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.869910 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457179698997 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.227185559 +0000 UTC m=+6.502682519,LastTimestamp:2026-02-02 00:10:07.227185559 +0000 UTC m=+6.502682519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.877856 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045717981089a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.228725402 +0000 UTC m=+6.504222342,LastTimestamp:2026-02-02 00:10:07.228725402 +0000 UTC m=+6.504222342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.884356 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1890457189184941 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.490296129 +0000 UTC m=+6.765793109,LastTimestamp:2026-02-02 00:10:07.490296129 +0000 UTC m=+6.765793109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.891425 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045718a6772cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.512261324 +0000 UTC m=+6.787758294,LastTimestamp:2026-02-02 00:10:07.512261324 +0000 UTC m=+6.787758294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.898977 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045718a7e1299 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.513744025 +0000 UTC m=+6.789240965,LastTimestamp:2026-02-02 00:10:07.513744025 +0000 UTC m=+6.789240965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.905680 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045719b0e4149 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.791628617 +0000 UTC m=+7.067125567,LastTimestamp:2026-02-02 00:10:07.791628617 +0000 UTC m=+7.067125567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.912196 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045719c21f473 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.809696883 +0000 UTC m=+7.085193843,LastTimestamp:2026-02-02 00:10:07.809696883 +0000 UTC m=+7.085193843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.914390 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189045719c469bd7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:07.812099031 +0000 UTC m=+7.087595971,LastTimestamp:2026-02-02 00:10:07.812099031 +0000 UTC m=+7.087595971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.918528 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904571adaeac91 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:08.104131729 +0000 UTC m=+7.379628699,LastTimestamp:2026-02-02 00:10:08.104131729 +0000 UTC m=+7.379628699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.920912 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18904571aedb7dac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:08.12384606 +0000 UTC 
m=+7.399343030,LastTimestamp:2026-02-02 00:10:08.12384606 +0000 UTC m=+7.399343030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.924219 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-controller-manager-crc.18904572ceed8866 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Feb 02 00:10:22 crc kubenswrapper[5108]: body: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:12.956866662 +0000 UTC m=+12.232363622,LastTimestamp:2026-02-02 00:10:12.956866662 +0000 UTC m=+12.232363622,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.928552 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18904572ceef42e9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:12.956979945 +0000 UTC m=+12.232476905,LastTimestamp:2026-02-02 00:10:12.956979945 +0000 UTC m=+12.232476905,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.934426 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.1890457375a463fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Feb 02 00:10:22 crc kubenswrapper[5108]: body: Feb 02 
00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:15.753868282 +0000 UTC m=+15.029365212,LastTimestamp:2026-02-02 00:10:15.753868282 +0000 UTC m=+15.029365212,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.938335 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1890457375a569f0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:15.753935344 +0000 UTC m=+15.029432284,LastTimestamp:2026-02-02 00:10:15.753935344 +0000 UTC m=+15.029432284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.943752 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18904573ab3c335f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 02 00:10:22 crc kubenswrapper[5108]: body: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:16.653009759 +0000 UTC m=+15.928506699,LastTimestamp:2026-02-02 00:10:16.653009759 +0000 UTC m=+15.928506699,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.951605 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904573ab3d0574 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get 
\"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:16.65306354 +0000 UTC m=+15.928560480,LastTimestamp:2026-02-02 00:10:16.65306354 +0000 UTC m=+15.928560480,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.959607 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18904573d011d060 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 02 00:10:22 crc kubenswrapper[5108]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 00:10:22 crc kubenswrapper[5108]: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:17.270988896 +0000 UTC m=+16.546485836,LastTimestamp:2026-02-02 00:10:17.270988896 +0000 UTC m=+16.546485836,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.967707 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904573d0128dd5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:17.271037397 +0000 UTC m=+16.546534337,LastTimestamp:2026-02-02 00:10:17.271037397 +0000 UTC m=+16.546534337,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.977228 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18904574fcc2c49d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:46118->192.168.126.11:17697: read: connection reset by peer Feb 02 00:10:22 crc kubenswrapper[5108]: body: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:22.315750557 +0000 UTC m=+21.591247517,LastTimestamp:2026-02-02 00:10:22.315750557 +0000 UTC m=+21.591247517,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.987281 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904574fcc44f78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46118->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:22.31585164 +0000 UTC m=+21.591348600,LastTimestamp:2026-02-02 00:10:22.31585164 +0000 UTC m=+21.591348600,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.991792 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 02 00:10:22 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18904574fccbf587 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Feb 02 00:10:22 crc kubenswrapper[5108]: body: Feb 02 00:10:22 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:22.316352903 +0000 UTC m=+21.591849843,LastTimestamp:2026-02-02 00:10:22.316352903 +0000 UTC m=+21.591849843,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 02 00:10:22 crc kubenswrapper[5108]: > Feb 02 00:10:22 crc kubenswrapper[5108]: E0202 00:10:22.996641 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904574fccce39f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:22.316413855 +0000 UTC m=+21.591910795,LastTimestamp:2026-02-02 00:10:22.316413855 +0000 UTC m=+21.591910795,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.001219 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18904570f691e318\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570f691e318 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.032014616 +0000 UTC m=+4.307511556,LastTimestamp:2026-02-02 00:10:22.713465394 +0000 UTC m=+21.988962324,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.006225 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1890457102b460df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1890457102b460df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.235601631 +0000 UTC m=+4.511098561,LastTimestamp:2026-02-02 00:10:22.983679855 +0000 UTC m=+22.259176785,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.013705 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045710387965a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189045710387965a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.249443418 +0000 UTC m=+4.524940348,LastTimestamp:2026-02-02 00:10:23.002223606 +0000 UTC m=+22.277720576,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.452603 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.678048 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.678393 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.680011 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.680074 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.680096 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.680843 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.697660 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.716474 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.719627 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.719963 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22"} Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720175 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720846 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720889 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:23 crc 
kubenswrapper[5108]: I0202 00:10:23.720903 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720917 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720940 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:23 crc kubenswrapper[5108]: I0202 00:10:23.720954 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.721468 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:23 crc kubenswrapper[5108]: E0202 00:10:23.721695 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:24 crc kubenswrapper[5108]: E0202 00:10:24.106170 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.450506 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.724534 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.725784 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.727982 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22" exitCode=255 Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.728041 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22"} Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.728089 5108 scope.go:117] "RemoveContainer" containerID="3087a7daace8c6ad8a6d2570530f65d5e7ee3065879cb91a75a26f38ff7a8f52" Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.728328 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.729320 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.729364 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.729383 5108 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:24 crc kubenswrapper[5108]: E0202 00:10:24.729943 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:24 crc kubenswrapper[5108]: I0202 00:10:24.730417 5108 scope.go:117] "RemoveContainer" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22" Feb 02 00:10:24 crc kubenswrapper[5108]: E0202 00:10:24.730802 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:10:24 crc kubenswrapper[5108]: E0202 00:10:24.736034 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.450554 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.732262 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.753067 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.753331 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.754261 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.754381 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.754456 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:25 crc kubenswrapper[5108]: E0202 
00:10:25.755071 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:25 crc kubenswrapper[5108]: I0202 00:10:25.755498 5108 scope.go:117] "RemoveContainer" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22" Feb 02 00:10:25 crc kubenswrapper[5108]: E0202 00:10:25.755791 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:10:25 crc kubenswrapper[5108]: E0202 00:10:25.760803 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045758cb48d24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:25.755750855 +0000 UTC m=+25.031247785,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:26 crc kubenswrapper[5108]: I0202 00:10:26.451560 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:27 crc kubenswrapper[5108]: I0202 00:10:27.452012 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.449695 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.698472 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.700478 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.700600 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.700624 5108 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 00:10:28 crc kubenswrapper[5108]: I0202 00:10:28.700682 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 02 00:10:28 crc kubenswrapper[5108]: E0202 00:10:28.715514 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 02 00:10:29 crc kubenswrapper[5108]: I0202 00:10:29.451376 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:30 crc kubenswrapper[5108]: E0202 00:10:30.362058 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 02 00:10:30 crc kubenswrapper[5108]: I0202 00:10:30.452596 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:31 crc kubenswrapper[5108]: E0202 00:10:31.116351 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 02 00:10:31 crc kubenswrapper[5108]: I0202 00:10:31.453373 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:31 crc kubenswrapper[5108]: E0202 00:10:31.610461 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 00:10:32 crc kubenswrapper[5108]: I0202 00:10:32.452112 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:32 crc kubenswrapper[5108]: E0202 00:10:32.874680 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.447168 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:33 crc kubenswrapper[5108]: E0202 00:10:33.720119 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at 
the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.721275 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.721804 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.723442 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.723521 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.723547 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:33 crc kubenswrapper[5108]: E0202 00:10:33.724488 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:33 crc kubenswrapper[5108]: I0202 00:10:33.725140 5108 scope.go:117] "RemoveContainer" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22" Feb 02 00:10:33 crc kubenswrapper[5108]: E0202 00:10:33.725681 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:10:33 crc kubenswrapper[5108]: E0202 00:10:33.732175 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045758cb48d24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:33.725595416 +0000 UTC m=+33.001092386,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:34 crc kubenswrapper[5108]: I0202 00:10:34.453478 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:34 crc kubenswrapper[5108]: E0202 00:10:34.946135 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource 
\"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.453217 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.715956 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.718011 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.718076 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.718098 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:35 crc kubenswrapper[5108]: I0202 00:10:35.718141 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 02 00:10:35 crc kubenswrapper[5108]: E0202 00:10:35.734924 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 02 00:10:36 crc kubenswrapper[5108]: I0202 00:10:36.450358 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:37 crc kubenswrapper[5108]: I0202 00:10:37.453357 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:38 crc kubenswrapper[5108]: E0202 00:10:38.126832 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 02 00:10:38 crc kubenswrapper[5108]: I0202 00:10:38.452737 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:39 crc kubenswrapper[5108]: I0202 00:10:39.453966 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:40 crc kubenswrapper[5108]: I0202 00:10:40.452621 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:41 crc kubenswrapper[5108]: I0202 00:10:41.452872 5108 csi_plugin.go:988] Failed to contact API server when waiting 
for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:41 crc kubenswrapper[5108]: E0202 00:10:41.611103 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.452501 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.735596 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.736910 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.736983 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.737004 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:42 crc kubenswrapper[5108]: I0202 00:10:42.737047 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 02 00:10:42 crc kubenswrapper[5108]: E0202 00:10:42.754028 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 02 00:10:43 crc kubenswrapper[5108]: I0202 00:10:43.453511 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:44 crc kubenswrapper[5108]: I0202 00:10:44.452355 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:45 crc kubenswrapper[5108]: E0202 00:10:45.131971 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 02 00:10:45 crc kubenswrapper[5108]: I0202 00:10:45.453691 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:46 crc kubenswrapper[5108]: E0202 00:10:46.195947 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 02 00:10:46 crc kubenswrapper[5108]: I0202 00:10:46.452666 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.421649 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.453094 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.557327 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.558626 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.558689 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.558710 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.559414 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.559839 5108 scope.go:117] "RemoveContainer" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22" Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.570678 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18904570f691e318\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18904570f691e318 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.032014616 +0000 UTC m=+4.307511556,LastTimestamp:2026-02-02 00:10:47.562973506 +0000 UTC m=+46.838470436,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.761811 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 02 00:10:47 crc kubenswrapper[5108]: I0202 00:10:47.812457 5108 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.814209 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1890457102b460df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1890457102b460df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.235601631 +0000 UTC m=+4.511098561,LastTimestamp:2026-02-02 00:10:47.80757122 +0000 UTC m=+47.083068150,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:47 crc kubenswrapper[5108]: E0202 00:10:47.833866 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045710387965a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045710387965a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:05.249443418 +0000 UTC m=+4.524940348,LastTimestamp:2026-02-02 00:10:47.822506536 +0000 UTC m=+47.098003466,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.450108 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.822879 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.824099 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.826483 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b" exitCode=255 Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.826524 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b"} Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.826562 5108 scope.go:117] "RemoveContainer" containerID="45a49c5807370f54bb53c951b3f111cc9ffd3a15027a2be5dd9e43a6d59e3f22" Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.826957 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.827858 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.827930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.827957 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:48 crc kubenswrapper[5108]: E0202 00:10:48.828532 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:48 crc kubenswrapper[5108]: I0202 00:10:48.828907 5108 scope.go:117] "RemoveContainer" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b" Feb 02 00:10:48 crc kubenswrapper[5108]: E0202 00:10:48.829336 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:10:48 crc kubenswrapper[5108]: E0202 00:10:48.842973 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045758cb48d24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:48.829266091 +0000 UTC m=+48.104763061,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.450962 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.755080 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:49 crc 
kubenswrapper[5108]: I0202 00:10:49.756580 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.756644 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.756692 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.756738 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 02 00:10:49 crc kubenswrapper[5108]: E0202 00:10:49.776980 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 02 00:10:49 crc kubenswrapper[5108]: I0202 00:10:49.834551 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 02 00:10:50 crc kubenswrapper[5108]: I0202 00:10:50.451555 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:51 crc kubenswrapper[5108]: I0202 00:10:51.452325 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:51 crc kubenswrapper[5108]: E0202 00:10:51.612143 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 00:10:52 crc kubenswrapper[5108]: E0202 00:10:52.142859 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 02 00:10:52 crc kubenswrapper[5108]: I0202 00:10:52.452459 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.452608 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.721286 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.721548 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.722880 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.723026 5108 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.723054 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:53 crc kubenswrapper[5108]: E0202 00:10:53.723811 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:53 crc kubenswrapper[5108]: I0202 00:10:53.724489 5108 scope.go:117] "RemoveContainer" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b" Feb 02 00:10:53 crc kubenswrapper[5108]: E0202 00:10:53.724992 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:10:53 crc kubenswrapper[5108]: E0202 00:10:53.733444 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045758cb48d24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:53.724929747 +0000 UTC m=+53.000426717,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:54 crc kubenswrapper[5108]: I0202 00:10:54.453807 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.449514 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.752304 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.752794 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.754107 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.754172 5108 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.754193 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:55 crc kubenswrapper[5108]: E0202 00:10:55.754893 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:55 crc kubenswrapper[5108]: I0202 00:10:55.755422 5108 scope.go:117] "RemoveContainer" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b" Feb 02 00:10:55 crc kubenswrapper[5108]: E0202 00:10:55.755820 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:10:55 crc kubenswrapper[5108]: E0202 00:10:55.763721 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189045758cb48d24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189045758cb48d24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:10:24.730737956 +0000 UTC m=+24.006234886,LastTimestamp:2026-02-02 00:10:55.755756658 +0000 UTC m=+55.031253628,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.452378 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.777805 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.779968 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.780170 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.780260 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:56 crc kubenswrapper[5108]: I0202 00:10:56.780343 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 02 00:10:56 crc kubenswrapper[5108]: E0202 00:10:56.793050 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes 
\"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 02 00:10:57 crc kubenswrapper[5108]: E0202 00:10:57.274663 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.452535 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.664410 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.664718 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.666777 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.666837 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:10:57 crc kubenswrapper[5108]: I0202 00:10:57.666856 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:10:57 crc kubenswrapper[5108]: E0202 00:10:57.667382 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:10:58 crc kubenswrapper[5108]: I0202 00:10:58.453776 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:10:59 crc kubenswrapper[5108]: E0202 00:10:59.152943 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 02 00:10:59 crc kubenswrapper[5108]: I0202 00:10:59.453965 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:11:00 crc kubenswrapper[5108]: I0202 00:11:00.452483 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:11:01 crc kubenswrapper[5108]: I0202 00:11:01.452020 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:11:01 crc kubenswrapper[5108]: E0202 00:11:01.613400 5108 eviction_manager.go:292] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 00:11:02 crc kubenswrapper[5108]: I0202 00:11:02.452307 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.455013 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.793992 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.795714 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.795772 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.795791 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:03 crc kubenswrapper[5108]: I0202 00:11:03.795832 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 02 00:11:03 crc kubenswrapper[5108]: E0202 00:11:03.809826 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 02 00:11:04 crc kubenswrapper[5108]: I0202 00:11:04.454348 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:11:05 crc kubenswrapper[5108]: I0202 00:11:05.453438 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:11:06 crc kubenswrapper[5108]: E0202 00:11:06.162218 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 02 00:11:06 crc kubenswrapper[5108]: I0202 00:11:06.452790 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:11:07 crc kubenswrapper[5108]: I0202 00:11:07.452386 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 02 00:11:07 crc kubenswrapper[5108]: I0202 00:11:07.635004 5108 csr.go:274] "Certificate signing request is approved, waiting to be issued" 
logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-nqwjk" Feb 02 00:11:07 crc kubenswrapper[5108]: I0202 00:11:07.644311 5108 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-nqwjk" Feb 02 00:11:07 crc kubenswrapper[5108]: I0202 00:11:07.682601 5108 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 02 00:11:08 crc kubenswrapper[5108]: I0202 00:11:08.273347 5108 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 00:11:08 crc kubenswrapper[5108]: I0202 00:11:08.646466 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-04 00:06:07 +0000 UTC" deadline="2026-02-27 01:31:16.615926221 +0000 UTC" Feb 02 00:11:08 crc kubenswrapper[5108]: I0202 00:11:08.646613 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="601h20m7.96932356s" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.810472 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.812017 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.812106 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.812128 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.812364 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.823655 5108 kubelet_node_status.go:127] "Node was previously registered" node="crc" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.824045 5108 kubelet_node_status.go:81] "Successfully registered node" node="crc" Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.824073 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.827509 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.827561 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.827573 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.827593 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.827607 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:10Z","lastTransitionTime":"2026-02-02T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.840155 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.852122 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.852173 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.852187 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.852210 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.852247 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:10Z","lastTransitionTime":"2026-02-02T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.905991 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the first failed patch above ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915055 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915140 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
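
The status-patch failures above are identical: the kubelet's PATCH to the Node object is intercepted by the node.network-node-identity.openshift.io admission webhook, and the API server cannot even open a TCP connection to it at 127.0.0.1:9743. As an illustration only (not part of the log), a minimal Go probe against that same loopback endpoint, with the address and 10s timeout taken from the error text, reproduces the "connection refused" outcome while the webhook is not serving:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The patch fails before TLS even starts, so a plain TCP dial is
	// enough to distinguish "webhook not serving" from a cert problem.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 10*time.Second)
	if err != nil {
		fmt.Printf("webhook endpoint unreachable: %v\n", err) // e.g. connection refused
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting connections")
}

Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915155 5108 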
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915176 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.915190 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:10Z","lastTransitionTime":"2026-02-02T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.931792 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the first failed patch above ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.939659 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.939719 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
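
Every NotReady transition recorded above carries the same root cause: NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. A small sketch, assuming it is run directly on the node, of the same directory check the container runtime performs; the accepted extensions follow common CNI conventions and are an assumption here, not taken from this log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d/" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // typical CNI config extensions
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration files: network plugin not ready")
	}
}

Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.939734 5108 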
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.939756 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:10 crc kubenswrapper[5108]: I0202 00:11:10.939771 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:10Z","lastTransitionTime":"2026-02-02T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.952995 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the first failed patch above ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.953170 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 02 00:11:10 crc kubenswrapper[5108]: E0202 00:11:10.953218 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
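
This final attempt ends the sequence: after a fixed number of consecutive patch failures the kubelet logs "update node status exceeds retry count" and gives up until the next status-update interval rather than retrying forever. A minimal sketch of that control flow; the retry limit of 5 matches the upstream kubelet's nodeStatusUpdateRetry constant as an assumption, and tryUpdateNodeStatus is a hypothetical stand-in for the real patch call:

package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed from upstream kubelet defaults

// tryUpdateNodeStatus stands in for the real status patch; here it always
// fails the way the webhook calls above do.
func tryUpdateNodeStatus(attempt int) error {
	return errors.New("connect: connection refused")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}

Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.053926 5108 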
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.154367 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.255557 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.355975 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.457053 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: I0202 00:11:11.557294 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.557803 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: I0202 00:11:11.558353 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:11 crc kubenswrapper[5108]: I0202 00:11:11.558434 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:11 crc kubenswrapper[5108]: I0202 00:11:11.558459 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.559295 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:11:11 crc kubenswrapper[5108]: I0202 00:11:11.559756 5108 scope.go:117] "RemoveContainer" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.614575 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.658630 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.759337 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.859591 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:11 crc kubenswrapper[5108]: E0202 00:11:11.960587 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.061518 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.162658 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.263476 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.304939 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.306775 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0"} Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.307099 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.307689 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.307779 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:12 crc kubenswrapper[5108]: I0202 00:11:12.307839 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.308294 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.364071 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.464803 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.566035 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.667020 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.768101 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.868311 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:12 crc kubenswrapper[5108]: E0202 00:11:12.969316 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.070072 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.170887 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.272072 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.373113 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.473684 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.574547 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 
00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.675328 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.776516 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.877097 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:13 crc kubenswrapper[5108]: E0202 00:11:13.977552 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.078746 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.179854 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.280424 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.315118 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.316338 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.318608 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" exitCode=255 Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.318722 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0"} Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.318824 5108 scope.go:117] "RemoveContainer" containerID="faf0cf79ed7c7e46ca49f30960c784e137edfe716bfe296cbe9017a8f0728b4b" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.319127 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.319930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.319991 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.320011 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.320723 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:11:14 crc kubenswrapper[5108]: I0202 00:11:14.321171 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.321534 5108 
Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.381861 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.482790 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.583311 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.683794 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.784585 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.884745 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:14 crc kubenswrapper[5108]: E0202 00:11:14.985833 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.086714 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.187790 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.288606 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.322990 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.389162 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.490062 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.590724 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.690907 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.752318 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.752661 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.753753 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02
00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.753809 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.753829 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.755850 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:11:15 crc kubenswrapper[5108]: I0202 00:11:15.756523 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.756995 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.791954 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.893074 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:15 crc kubenswrapper[5108]: E0202 00:11:15.993674 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:16 crc kubenswrapper[5108]: E0202 00:11:16.094686 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:16 crc kubenswrapper[5108]: E0202 00:11:16.195581 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:16 crc kubenswrapper[5108]: E0202 00:11:16.296104 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:16 crc kubenswrapper[5108]: E0202 00:11:16.396531 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:16 crc kubenswrapper[5108]: E0202 00:11:16.497488 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:16 crc kubenswrapper[5108]: E0202 00:11:16.597879 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:16 crc kubenswrapper[5108]: E0202 00:11:16.698191 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:16 crc kubenswrapper[5108]: E0202 00:11:16.798433 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:16 crc kubenswrapper[5108]: E0202 00:11:16.899219 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:16 crc kubenswrapper[5108]: E0202 00:11:16.999623 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:17 crc kubenswrapper[5108]: E0202 00:11:17.100659 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 02 00:11:17 crc kubenswrapper[5108]: E0202 00:11:17.201293 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:17 crc kubenswrapper[5108]: E0202 00:11:17.302264 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:17 crc kubenswrapper[5108]: E0202 00:11:17.403377 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:17 crc kubenswrapper[5108]: E0202 00:11:17.504187 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:17 crc kubenswrapper[5108]: E0202 00:11:17.605334 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:17 crc kubenswrapper[5108]: E0202 00:11:17.706020 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:17 crc kubenswrapper[5108]: E0202 00:11:17.806579 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:17 crc kubenswrapper[5108]: E0202 00:11:17.907462 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.008103 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.108960 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.210135 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.310385 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.411465 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.512016 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: I0202 00:11:18.557061 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 02 00:11:18 crc kubenswrapper[5108]: I0202 00:11:18.558220 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:18 crc kubenswrapper[5108]: I0202 00:11:18.558306 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:18 crc kubenswrapper[5108]: I0202 00:11:18.558328 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.558879 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.612347 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.712528 5108 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.812829 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:18 crc kubenswrapper[5108]: E0202 00:11:18.913297 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.013561 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.113957 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.215017 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.315666 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.416250 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.516433 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.616725 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.717905 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.818699 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: E0202 00:11:19.918946 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 00:11:19 crc kubenswrapper[5108]: I0202 00:11:19.922706 5108 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:11:19 crc kubenswrapper[5108]: I0202 00:11:19.971855 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Feb 02 00:11:19 crc kubenswrapper[5108]: I0202 00:11:19.988537 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.022056 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.022116 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.022134 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.022163 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.022181 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.091260 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.124912 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.124995 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.125022 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.125059 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.125083 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.190124 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.228558 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.228611 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.228624 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.228642 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.228653 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.290171 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.331546 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.331817 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.331843 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.332417 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.332446 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.446662 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.446727 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.446745 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.446768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.446783 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.463508 5108 apiserver.go:52] "Watching apiserver" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.470434 5108 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.471013 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-d74m7","openshift-multus/multus-q22wv","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-node-66k84","openshift-image-registry/node-ca-r6t6x","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-operator/iptables-alerter-5jnd7","openshift-dns/node-resolver-xdw92","openshift-etcd/etcd-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-additional-cni-plugins-gbldp","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/network-metrics-daemon-26ppl","openshift-network-diagnostics/network-check-target-fhkjl"] Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.473347 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.473677 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.474032 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.474787 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.475275 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.477702 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.478715 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.478866 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.478897 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.480310 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.480615 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.480892 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.483179 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.483573 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.483602 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.484179 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.484195 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.484963 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.490118 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.492903 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.493931 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.494509 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.495113 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.496098 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.498508 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.499038 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.499676 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.499863 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.500196 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.500321 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.500613 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.503276 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.506984 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.509736 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.511633 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.511794 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.513138 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.518693 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.519697 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.522006 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.522913 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.523196 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.527940 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.530828 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.531061 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.530924 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.533930 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.535922 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.536314 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.536772 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.539043 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.541360 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.541376 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.541546 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.541560 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.542437 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.542608 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.543057 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.543388 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.543726 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.543755 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.548531 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.548826 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.549000 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.549102 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.549184 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.555197 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.571050 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.571407 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.571476 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.571503 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.571531 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.574473 5108 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.574602 5108 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.581385 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.584802 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.584839 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.584855 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.584941 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:21.084916243 +0000 UTC m=+80.360413183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.585524 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.594586 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.597478 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.609941 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.629700 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.641930 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.653094 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.653378 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.653518 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.653629 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.653720 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.655537 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.665543 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672441 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672480 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672512 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672544 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672572 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672595 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672620 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672644 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672668 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672693 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672719 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672742 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672769 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672793 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672818 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672843 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672869 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672894 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672918 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672941 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672967 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.672990 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673035 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673081 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673103 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673124 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: 
\"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673147 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673182 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673204 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673248 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673303 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673326 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673349 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673376 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673399 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673422 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673446 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673471 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673499 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673620 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673646 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673671 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673701 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673725 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673749 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673773 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673795 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673818 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673842 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673865 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673888 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673912 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673968 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.673992 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674020 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674043 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod 
\"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674084 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674111 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674209 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674258 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674284 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674310 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674336 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674360 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674385 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674409 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674438 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674472 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674495 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674518 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674544 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674569 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674593 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674618 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674650 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674676 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" 
(UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674703 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674728 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674753 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674775 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674800 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674824 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674849 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674873 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674897 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674922 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674947 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674970 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.674995 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675024 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675051 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675077 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675103 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675127 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675151 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675175 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675199 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675240 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675266 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675292 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675317 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675343 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675381 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675417 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675453 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675490 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675527 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.675564 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676060 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676082 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676076 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676346 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676536 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676777 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.676996 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677133 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677292 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677648 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677671 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677983 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.677948 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.678288 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.678613 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.678677 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.678811 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.678924 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.679182 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.679209 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-02 00:11:21.179164679 +0000 UTC m=+80.454661649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.679384 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.679691 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.679819 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.680135 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.680249 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.680547 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.680731 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.681131 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.681406 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.681807 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.682160 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.682202 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.682413 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.682476 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.682759 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683125 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683456 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683622 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683672 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683711 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683745 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683773 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683797 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683824 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 
00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683857 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683878 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683874 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683901 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683922 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683943 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683964 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.683986 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684004 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684036 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: 
\"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684055 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684077 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684097 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684638 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684671 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684766 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.684952 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.685174 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.685421 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.685430 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.685955 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686041 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686050 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686074 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686339 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686738 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686878 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.686966 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687010 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687113 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687178 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687161 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687373 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687487 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). 
InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.687935 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688281 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688345 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688422 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688392 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688524 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688580 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688631 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688739 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: 
\"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.688985 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689030 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689079 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689121 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689159 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689201 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689266 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689339 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689377 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689421 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689459 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689503 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689548 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689588 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689599 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689629 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689683 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689723 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689767 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689764 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689810 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689850 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689902 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689953 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690002 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690057 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690106 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690153 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690203 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690313 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod 
\"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690369 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690418 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692069 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692112 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692141 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692163 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692185 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692209 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692244 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692265 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: 
\"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692288 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692342 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692389 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692410 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692432 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694257 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694309 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694343 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694376 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694415 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694452 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694538 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695726 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695760 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695789 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695818 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696040 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696062 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696087 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696110 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696137 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696162 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696200 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696248 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696273 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696294 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696312 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696333 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696360 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696382 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696405 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696570 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696593 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696612 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696633 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696655 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696678 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696705 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696727 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696750 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696774 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696794 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696814 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696838 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696859 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696878 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696899 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696918 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697003 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc 
kubenswrapper[5108]: I0202 00:11:20.697133 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697157 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8fbr\" (UniqueName: \"kubernetes.io/projected/ddd95e62-4b23-4887-b6e7-364a01924524-kube-api-access-d8fbr\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697176 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697196 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfgl7\" (UniqueName: \"kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698223 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w26ft\" (UniqueName: \"kubernetes.io/projected/93334c92-cf5f-4978-b891-2b8e5ea35025-kube-api-access-w26ft\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698332 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-multus\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698373 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698405 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698557 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698629 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93334c92-cf5f-4978-b891-2b8e5ea35025-proxy-tls\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701525 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-os-release\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701593 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-conf-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701928 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-bin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701971 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702010 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702054 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-netns\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702093 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cnibin\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702130 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702172 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702223 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-hostroot\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689967 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702311 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5434f05-9acb-4d0c-a175-d5efc97194da-hosts-file\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702356 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ddd95e62-4b23-4887-b6e7-364a01924524-host\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702377 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689979 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690067 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.689967 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690087 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.690079 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691166 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691280 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691449 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691486 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691487 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691827 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691881 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.691900 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692661 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692807 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692898 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.692980 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.693185 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.693213 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694284 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.694937 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695192 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695156 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695293 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695401 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695453 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.695726 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696323 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696714 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.696908 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697146 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697371 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697389 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697393 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697406 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697595 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697668 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.697915 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698029 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698394 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698761 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698748 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698785 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698940 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699018 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699067 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699274 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699325 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). 
InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699351 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699565 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702757 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699703 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699599 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702397 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703508 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/93334c92-cf5f-4978-b891-2b8e5ea35025-mcd-auth-proxy-config\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703602 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703645 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-socket-dir-parent\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703682 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703709 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 
crc kubenswrapper[5108]: I0202 00:11:20.703738 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsmhb\" (UniqueName: \"kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703764 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699842 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.699883 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.700379 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.700405 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.700772 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701017 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). 
InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701065 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701211 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701566 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701891 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701998 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.701906 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702197 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702468 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702683 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.702700 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.698841 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.703037 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.704272 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.704379 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.704482 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.704680 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705068 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705102 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705461 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705563 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705608 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cnibin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705636 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f5434f05-9acb-4d0c-a175-d5efc97194da-tmp-dir\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705653 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.705900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706011 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706270 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706371 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706465 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706480 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-kubelet\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706536 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-os-release\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706578 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706658 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/93334c92-cf5f-4978-b891-2b8e5ea35025-rootfs\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706709 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-k8s-cni-cncf-io\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706750 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.706993 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707006 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707066 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707203 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2kbg\" (UniqueName: \"kubernetes.io/projected/f5434f05-9acb-4d0c-a175-d5efc97194da-kube-api-access-g2kbg\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707219 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.707362 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707443 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.707493 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:21.207465888 +0000 UTC m=+80.482962818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707702 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cni-binary-copy\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707736 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ddd95e62-4b23-4887-b6e7-364a01924524-serviceca\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707796 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707821 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-etc-kubernetes\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707841 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfg4q\" (UniqueName: \"kubernetes.io/projected/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-kube-api-access-vfg4q\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707871 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707944 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.707992 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc 
kubenswrapper[5108]: I0202 00:11:20.708028 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708090 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-system-cni-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708127 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxtcp\" (UniqueName: \"kubernetes.io/projected/f77c18f0-131e-482e-8e09-602b39b0c163-kube-api-access-mxtcp\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708195 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708308 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.708385 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.708493 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:21.208464924 +0000 UTC m=+80.483961874 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708537 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708569 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708602 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-system-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708629 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708634 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9m5\" (UniqueName: \"kubernetes.io/projected/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-kube-api-access-ft9m5\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708686 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708711 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708729 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708746 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708777 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708920 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708946 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-daemon-config\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708968 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-multus-certs\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.708989 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709211 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709250 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709265 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709385 5108 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709448 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709503 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709528 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709577 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709595 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709609 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709693 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709709 5108 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709724 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709886 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709901 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.709914 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.710322 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.710369 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.710384 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711049 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711071 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711133 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711148 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711163 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711178 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711196 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711210 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711261 5108 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711280 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711299 5108 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711317 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711330 5108 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711346 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711359 5108 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711373 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711386 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711399 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711412 5108 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711427 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711440 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711455 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711470 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711484 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath 
\"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711500 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711516 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711529 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711546 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711559 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711574 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711562 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711587 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711699 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711722 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711744 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711765 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711785 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711805 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711824 5108 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711845 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711866 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711886 5108 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711908 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711927 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 
00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711946 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711967 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.711985 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712003 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712022 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712040 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712057 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712077 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712096 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712114 5108 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712135 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712155 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712174 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712219 5108 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712269 5108 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712338 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712359 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712380 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712399 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712421 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712442 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712459 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712478 5108 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712499 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712519 5108 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712544 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712572 5108 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712597 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712676 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712698 5108 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712717 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712722 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712739 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712816 5108 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712843 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712864 5108 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712884 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712910 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712932 5108 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712960 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.712982 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713003 5108 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713025 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713047 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713067 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713086 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713142 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713165 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713306 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713366 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713390 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713410 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713432 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713454 5108 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713475 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713494 5108 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713516 5108 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713536 5108 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713555 5108 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713576 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713596 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713614 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713654 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713672 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713695 5108 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713714 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713734 5108 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713753 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713771 5108 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713790 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713808 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713827 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713846 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713864 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713881 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713927 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713945 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713962 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: 
\"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.713981 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.714038 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.714369 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.715218 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.716617 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.718064 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.718082 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.719903 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.720210 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.720793 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.721021 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.720995 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.721171 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.721438 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.722257 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.722303 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.722682 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.722949 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.726317 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.727147 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.727908 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.727934 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.727985 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.728021 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.728266 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.728381 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.728471 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.728645 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:21.228614718 +0000 UTC m=+80.504111768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.728863 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.728862 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.728919 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.728932 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.734883 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.735089 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.735377 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.735386 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.735544 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.735700 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736044 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736084 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736192 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736416 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736435 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.736682 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.737385 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.737629 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.737843 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738075 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738112 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738361 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738376 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738462 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738485 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738587 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738671 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738802 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738866 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.738936 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739261 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739465 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739453 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739479 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739759 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739764 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739874 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.739980 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740050 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740087 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740132 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740265 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740405 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740563 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740707 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.740808 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.741014 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.741329 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.742552 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.742871 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.743104 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.743118 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.743288 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.744534 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.744576 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.744715 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.747187 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.747259 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b06
85b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"lo
g-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.751548 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.752409 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760101 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760178 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760196 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760246 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760262 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.760601 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc 
kubenswrapper[5108]: I0202 00:11:20.774104 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.775130 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.785074 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.786124 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.794064 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.799336 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.800401 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\
\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.813405 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815170 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815195 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815215 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-system-cni-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815247 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxtcp\" (UniqueName: \"kubernetes.io/projected/f77c18f0-131e-482e-8e09-602b39b0c163-kube-api-access-mxtcp\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815270 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815285 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815301 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-system-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815317 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ft9m5\" (UniqueName: \"kubernetes.io/projected/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-kube-api-access-ft9m5\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815341 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch\") pod \"ovnkube-node-66k84\" (UID: 
\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815360 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815375 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815395 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815413 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815431 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-daemon-config\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815447 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-multus-certs\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815464 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815482 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815501 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d8fbr\" (UniqueName: \"kubernetes.io/projected/ddd95e62-4b23-4887-b6e7-364a01924524-kube-api-access-d8fbr\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " 
pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815519 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815538 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vfgl7\" (UniqueName: \"kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815558 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w26ft\" (UniqueName: \"kubernetes.io/projected/93334c92-cf5f-4978-b891-2b8e5ea35025-kube-api-access-w26ft\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815577 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-multus\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815593 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815611 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815629 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93334c92-cf5f-4978-b891-2b8e5ea35025-proxy-tls\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815651 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-os-release\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815667 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-conf-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 
00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815686 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-bin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815702 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815717 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815736 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-netns\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815752 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cnibin\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815767 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815785 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815801 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-hostroot\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815819 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5434f05-9acb-4d0c-a175-d5efc97194da-hosts-file\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815834 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ddd95e62-4b23-4887-b6e7-364a01924524-host\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815850 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815870 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/93334c92-cf5f-4978-b891-2b8e5ea35025-mcd-auth-proxy-config\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815801 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"
memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",
\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815972 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816014 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816038 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816064 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-system-cni-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.815902 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816241 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-socket-dir-parent\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816287 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816321 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816356 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rsmhb\" (UniqueName: \"kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816391 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816438 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cnibin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816527 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-netns\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816558 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cnibin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816648 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-multus\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816766 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816833 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ddd95e62-4b23-4887-b6e7-364a01924524-host\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816847 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-os-release\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816849 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-socket-dir-parent\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816884 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-conf-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816917 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-cni-bin\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816918 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5434f05-9acb-4d0c-a175-d5efc97194da-hosts-file\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.816942 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817015 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817059 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817109 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817307 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 
02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817366 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817386 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817478 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-system-cni-dir\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817814 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-multus-certs\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817842 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817914 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817957 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cnibin\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.817991 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818031 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818060 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-hostroot\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818067 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818156 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.818496 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:20 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: source /etc/kubernetes/apiserver-url.env Feb 02 00:11:20 crc kubenswrapper[5108]: else Feb 02 00:11:20 crc kubenswrapper[5108]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 02 00:11:20 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 02 00:11:20 crc kubenswrapper[5108]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818536 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/93334c92-cf5f-4978-b891-2b8e5ea35025-mcd-auth-proxy-config\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818567 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818649 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f5434f05-9acb-4d0c-a175-d5efc97194da-tmp-dir\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818693 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818770 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818799 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818846 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-kubelet\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818880 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-os-release\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818913 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818933 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/93334c92-cf5f-4978-b891-2b8e5ea35025-rootfs\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818952 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-k8s-cni-cncf-io\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.818975 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819029 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g2kbg\" (UniqueName: \"kubernetes.io/projected/f5434f05-9acb-4d0c-a175-d5efc97194da-kube-api-access-g2kbg\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819052 5108 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cni-binary-copy\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819069 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ddd95e62-4b23-4887-b6e7-364a01924524-serviceca\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819087 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819131 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-etc-kubernetes\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vfg4q\" (UniqueName: \"kubernetes.io/projected/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-kube-api-access-vfg4q\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819191 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.819321 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819340 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.819375 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:21.319357871 +0000 UTC m=+80.594854801 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.819416 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820292 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820438 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820509 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f5434f05-9acb-4d0c-a175-d5efc97194da-tmp-dir\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.820578 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820668 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-etc-kubernetes\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820694 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-var-lib-kubelet\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-os-release\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820961 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.820795 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-host-run-k8s-cni-cncf-io\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.821154 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.821809 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/93334c92-cf5f-4978-b891-2b8e5ea35025-rootfs\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.821876 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-multus-daemon-config\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.821940 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-cni-binary-copy\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822025 5108 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822088 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822102 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822115 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822125 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" 
DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822139 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822152 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822170 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822180 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822190 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822199 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822209 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822218 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822246 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822256 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822268 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822280 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822290 5108 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 
00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822627 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.822299 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823377 5108 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823390 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823401 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823411 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823420 5108 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823430 5108 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823440 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823449 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823459 5108 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823529 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823541 5108 reconciler_common.go:299] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823553 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.823563 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824309 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824323 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824334 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824344 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824353 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824363 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824374 5108 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824385 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824395 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.824404 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833209 5108 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833351 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833370 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93334c92-cf5f-4978-b891-2b8e5ea35025-proxy-tls\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833548 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833967 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834073 5108 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834167 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834283 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834416 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834498 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834567 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834753 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834836 5108 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834957 5108 reconciler_common.go:299] "Volume 
detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835030 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835087 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835147 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.833793 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835262 5108 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834220 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.834047 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835445 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835569 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835583 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835594 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835606 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835617 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835629 5108 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835640 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835652 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: 
\"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835663 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835673 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835684 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835698 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835710 5108 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835720 5108 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835734 5108 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835747 5108 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835757 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835768 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835778 5108 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835789 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835801 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.835811 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.839355 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ddd95e62-4b23-4887-b6e7-364a01924524-serviceca\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.840004 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8fbr\" (UniqueName: \"kubernetes.io/projected/ddd95e62-4b23-4887-b6e7-364a01924524-kube-api-access-d8fbr\") pod \"node-ca-r6t6x\" (UID: \"ddd95e62-4b23-4887-b6e7-364a01924524\") " pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.840451 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-r6t6x" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.840894 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: source "/env/_master" Feb 02 00:11:20 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 02 00:11:20 crc kubenswrapper[5108]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 02 00:11:20 crc kubenswrapper[5108]: ho_enable="--enable-hybrid-overlay" Feb 02 00:11:20 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 02 00:11:20 crc kubenswrapper[5108]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 02 00:11:20 crc kubenswrapper[5108]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 02 00:11:20 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 02 00:11:20 crc kubenswrapper[5108]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --webhook-host=127.0.0.1 \ Feb 02 00:11:20 crc kubenswrapper[5108]: --webhook-port=9743 \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${ho_enable} \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-interconnect \ Feb 02 00:11:20 crc kubenswrapper[5108]: --disable-approver \ Feb 02 00:11:20 crc kubenswrapper[5108]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --wait-for-kubernetes-api=200s \ Feb 02 00:11:20 crc kubenswrapper[5108]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 02 00:11:20 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.842153 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2kbg\" (UniqueName: \"kubernetes.io/projected/f5434f05-9acb-4d0c-a175-d5efc97194da-kube-api-access-g2kbg\") pod \"node-resolver-xdw92\" (UID: \"f5434f05-9acb-4d0c-a175-d5efc97194da\") " pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.843287 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsmhb\" (UniqueName: \"kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb\") pod \"ovnkube-control-plane-57b78d8988-ccnbr\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.843393 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfg4q\" (UniqueName: \"kubernetes.io/projected/24f8cedc-9b82-4ef7-a7db-4ce57803e0ce-kube-api-access-vfg4q\") pod \"multus-q22wv\" (UID: \"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\") " pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.846439 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxtcp\" (UniqueName: \"kubernetes.io/projected/f77c18f0-131e-482e-8e09-602b39b0c163-kube-api-access-mxtcp\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.846541 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w26ft\" (UniqueName: \"kubernetes.io/projected/93334c92-cf5f-4978-b891-2b8e5ea35025-kube-api-access-w26ft\") pod \"machine-config-daemon-d74m7\" (UID: \"93334c92-cf5f-4978-b891-2b8e5ea35025\") " pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.847144 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfgl7\" (UniqueName: \"kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7\") pod \"ovnkube-node-66k84\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.847559 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft9m5\" (UniqueName: \"kubernetes.io/projected/131f7f53-e6cd-4e60-87d5-5a67b6f40b76-kube-api-access-ft9m5\") pod \"multus-additional-cni-plugins-gbldp\" (UID: \"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\") " pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.850742 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.853359 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddd95e62_4b23_4887_b6e7_364a01924524.slice/crio-896942b9503dfb123e81fe12f3e839f49bd2881d35de050a50cfa0fc867bb9e6 WatchSource:0}: Error finding container 896942b9503dfb123e81fe12f3e839f49bd2881d35de050a50cfa0fc867bb9e6: Status 404 returned error can't find the container with id 896942b9503dfb123e81fe12f3e839f49bd2881d35de050a50cfa0fc867bb9e6 Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.854980 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-5f555816d6ec189f7bd3d7e5ba213cdc54e4ba6984fd49cb3eb011639902fdde WatchSource:0}: Error finding container 5f555816d6ec189f7bd3d7e5ba213cdc54e4ba6984fd49cb3eb011639902fdde: Status 404 returned error can't find the container with id 5f555816d6ec189f7bd3d7e5ba213cdc54e4ba6984fd49cb3eb011639902fdde Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.856613 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 02 00:11:20 crc kubenswrapper[5108]: while [ true ]; Feb 02 00:11:20 crc kubenswrapper[5108]: do Feb 02 00:11:20 crc kubenswrapper[5108]: for f in $(ls /tmp/serviceca); do Feb 02 00:11:20 crc kubenswrapper[5108]: echo $f Feb 02 00:11:20 crc kubenswrapper[5108]: ca_file_path="/tmp/serviceca/${f}" Feb 02 00:11:20 crc kubenswrapper[5108]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 02 00:11:20 crc kubenswrapper[5108]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 02 00:11:20 crc kubenswrapper[5108]: if [ -e "${reg_dir_path}" ]; then Feb 02 00:11:20 crc kubenswrapper[5108]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 02 00:11:20 crc kubenswrapper[5108]: else Feb 02 00:11:20 crc kubenswrapper[5108]: mkdir $reg_dir_path Feb 02 00:11:20 crc kubenswrapper[5108]: cp $ca_file_path $reg_dir_path/ca.crt Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: for d in $(ls /etc/docker/certs.d); do Feb 02 00:11:20 crc kubenswrapper[5108]: echo $d Feb 02 00:11:20 crc kubenswrapper[5108]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 02 00:11:20 crc kubenswrapper[5108]: reg_conf_path="/tmp/serviceca/${dp}" Feb 02 00:11:20 crc kubenswrapper[5108]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 02 00:11:20 crc kubenswrapper[5108]: rm -rf /etc/docker/certs.d/$d Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: sleep 60 & wait ${!} Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8fbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-r6t6x_openshift-image-registry(ddd95e62-4b23-4887-b6e7-364a01924524): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.859824 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: source "/env/_master" Feb 02 00:11:20 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 02 00:11:20 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 02 00:11:20 crc kubenswrapper[5108]: --disable-webhook \ Feb 02 00:11:20 crc kubenswrapper[5108]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 02 00:11:20 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.859904 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-r6t6x" podUID="ddd95e62-4b23-4887-b6e7-364a01924524" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.859988 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.861145 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.861525 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.865493 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.865534 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.865571 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.865584 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.865602 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.865614 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.866831 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.871135 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.873379 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q22wv" Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.874406 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0298f7da_43a3_48a4_8e32_b772a82bd62d.slice/crio-b2c9667b3266dc7724f630d2a6f5b000f311e7134a92929d6e1f8855fc654058 WatchSource:0}: Error finding container b2c9667b3266dc7724f630d2a6f5b000f311e7134a92929d6e1f8855fc654058: Status 404 returned error can't find the container with id b2c9667b3266dc7724f630d2a6f5b000f311e7134a92929d6e1f8855fc654058 Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.878600 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:20 crc kubenswrapper[5108]: set -euo pipefail Feb 02 00:11:20 crc kubenswrapper[5108]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 02 00:11:20 crc kubenswrapper[5108]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 02 00:11:20 crc kubenswrapper[5108]: # As the secret mount is optional we must wait for the files to be present. Feb 02 00:11:20 crc kubenswrapper[5108]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 02 00:11:20 crc kubenswrapper[5108]: TS=$(date +%s) Feb 02 00:11:20 crc kubenswrapper[5108]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 02 00:11:20 crc kubenswrapper[5108]: HAS_LOGGED_INFO=0 Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: log_missing_certs(){ Feb 02 00:11:20 crc kubenswrapper[5108]: CUR_TS=$(date +%s) Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 02 00:11:20 crc kubenswrapper[5108]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 02 00:11:20 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 02 00:11:20 crc kubenswrapper[5108]: HAS_LOGGED_INFO=1 Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: } Feb 02 00:11:20 crc kubenswrapper[5108]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 02 00:11:20 crc kubenswrapper[5108]: log_missing_certs Feb 02 00:11:20 crc kubenswrapper[5108]: sleep 5 Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 02 00:11:20 crc kubenswrapper[5108]: exec /usr/bin/kube-rbac-proxy \ Feb 02 00:11:20 crc kubenswrapper[5108]: --logtostderr \ Feb 02 00:11:20 crc kubenswrapper[5108]: --secure-listen-address=:9108 \ Feb 02 00:11:20 crc kubenswrapper[5108]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 02 00:11:20 crc kubenswrapper[5108]: --upstream=http://127.0.0.1:29108/ \ Feb 02 00:11:20 crc kubenswrapper[5108]: --tls-private-key-file=${TLS_PK} \ Feb 02 00:11:20 crc kubenswrapper[5108]: --tls-cert-file=${TLS_CERT} Feb 02 00:11:20 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsmhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ccnbr_openshift-ovn-kubernetes(0298f7da-43a3-48a4-8e32-b772a82bd62d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.882883 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: source "/env/_master" Feb 02 00:11:20 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v6_join_subnet_opt= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 
00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # This is needed so that converting clusters from GA to TP Feb 02 00:11:20 crc kubenswrapper[5108]: # will rollout control plane pods as well Feb 02 00:11:20 crc kubenswrapper[5108]: network_segmentation_enabled_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: multi_network_enabled_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "true" != "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: route_advertisements_enable_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # Enable multi-network policy if configured (control-plane always full mode) Feb 02 00:11:20 crc kubenswrapper[5108]: multi_network_policy_enabled_flag= Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # Enable admin network policy if configured (control-plane always full mode) Feb 02 00:11:20 crc kubenswrapper[5108]: admin_network_policy_enabled_flag= Feb 02 00:11:20 crc 
kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: if [ "shared" == "shared" ]; then Feb 02 00:11:20 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode shared" Feb 02 00:11:20 crc kubenswrapper[5108]: elif [ "shared" == "local" ]; then Feb 02 00:11:20 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode local" Feb 02 00:11:20 crc kubenswrapper[5108]: else Feb 02 00:11:20 crc kubenswrapper[5108]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 02 00:11:20 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 02 00:11:20 crc kubenswrapper[5108]: exec /usr/bin/ovnkube \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-interconnect \ Feb 02 00:11:20 crc kubenswrapper[5108]: --init-cluster-manager "${K8S_NODE}" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 02 00:11:20 crc kubenswrapper[5108]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --metrics-bind-address "127.0.0.1:29108" \ Feb 02 00:11:20 crc kubenswrapper[5108]: --metrics-enable-pprof \ Feb 02 00:11:20 crc kubenswrapper[5108]: --metrics-enable-config-duration \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${ovn_v4_join_subnet_opt} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${ovn_v6_join_subnet_opt} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${dns_name_resolver_enabled_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${persistent_ips_enabled_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${multi_network_enabled_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${network_segmentation_enabled_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${gateway_mode_flags} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${route_advertisements_enable_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${preconfigured_udn_addresses_enable_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-egress-ip=true \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-egress-firewall=true \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-egress-qos=true \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-egress-service=true \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-multicast \ Feb 02 00:11:20 crc kubenswrapper[5108]: --enable-multi-external-gateway=true \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${multi_network_policy_enabled_flag} \ Feb 02 00:11:20 crc kubenswrapper[5108]: ${admin_network_policy_enabled_flag} Feb 02 00:11:20 crc kubenswrapper[5108]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsmhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ccnbr_openshift-ovn-kubernetes(0298f7da-43a3-48a4-8e32-b772a82bd62d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.883968 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.883986 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24f8cedc_9b82_4ef7_a7db_4ce57803e0ce.slice/crio-61e808d3ffdc264d45983a8def8fd8ab9b983bc91f4dc5058ee391798edad7f4 WatchSource:0}: Error finding container 61e808d3ffdc264d45983a8def8fd8ab9b983bc91f4dc5058ee391798edad7f4: Status 404 returned error can't find the container with id 61e808d3ffdc264d45983a8def8fd8ab9b983bc91f4dc5058ee391798edad7f4 Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.886112 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xdw92" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.886609 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 02 00:11:20 crc kubenswrapper[5108]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 02 00:11:20 crc kubenswrapper[5108]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,Recursive
ReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfg4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-q22wv_openshift-multus(24f8cedc-9b82-4ef7-a7db-4ce57803e0ce): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.888479 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-q22wv" podUID="24f8cedc-9b82-4ef7-a7db-4ce57803e0ce" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.890128 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0
768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.900351 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:20 crc kubenswrapper[5108]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:20 crc kubenswrapper[5108]: set -uo pipefail Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 02 00:11:20 crc kubenswrapper[5108]: HOSTS_FILE="/etc/hosts" Feb 02 00:11:20 crc kubenswrapper[5108]: TEMP_FILE="/tmp/hosts.tmp" Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # Make a temporary file with the old hosts file's attributes. Feb 02 00:11:20 crc kubenswrapper[5108]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 02 00:11:20 crc kubenswrapper[5108]: echo "Failed to preserve hosts file. Exiting." Feb 02 00:11:20 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: while true; do Feb 02 00:11:20 crc kubenswrapper[5108]: declare -A svc_ips Feb 02 00:11:20 crc kubenswrapper[5108]: for svc in "${services[@]}"; do Feb 02 00:11:20 crc kubenswrapper[5108]: # Fetch service IP from cluster dns if present. We make several tries Feb 02 00:11:20 crc kubenswrapper[5108]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 02 00:11:20 crc kubenswrapper[5108]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 02 00:11:20 crc kubenswrapper[5108]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 02 00:11:20 crc kubenswrapper[5108]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:20 crc kubenswrapper[5108]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:20 crc kubenswrapper[5108]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:20 crc kubenswrapper[5108]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 02 00:11:20 crc kubenswrapper[5108]: for i in ${!cmds[*]} Feb 02 00:11:20 crc kubenswrapper[5108]: do Feb 02 00:11:20 crc kubenswrapper[5108]: ips=($(eval "${cmds[i]}")) Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: svc_ips["${svc}"]="${ips[@]}" Feb 02 00:11:20 crc kubenswrapper[5108]: break Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # Update /etc/hosts only if we get valid service IPs Feb 02 00:11:20 crc kubenswrapper[5108]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 02 00:11:20 crc kubenswrapper[5108]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 02 00:11:20 crc kubenswrapper[5108]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 02 00:11:20 crc kubenswrapper[5108]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 02 00:11:20 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:20 crc kubenswrapper[5108]: continue Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # Append resolver entries for services Feb 02 00:11:20 crc kubenswrapper[5108]: rc=0 Feb 02 00:11:20 crc kubenswrapper[5108]: for svc in "${!svc_ips[@]}"; do Feb 02 00:11:20 crc kubenswrapper[5108]: for ip in ${svc_ips[${svc}]}; do Feb 02 00:11:20 crc kubenswrapper[5108]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: if [[ $rc -ne 0 ]]; then Feb 02 00:11:20 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:20 crc kubenswrapper[5108]: continue Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: Feb 02 00:11:20 crc kubenswrapper[5108]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 02 00:11:20 crc kubenswrapper[5108]: # Replace /etc/hosts with our modified version if needed Feb 02 00:11:20 crc kubenswrapper[5108]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 02 00:11:20 crc kubenswrapper[5108]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 02 00:11:20 crc kubenswrapper[5108]: fi Feb 02 00:11:20 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:20 crc kubenswrapper[5108]: unset svc_ips Feb 02 00:11:20 crc kubenswrapper[5108]: done Feb 02 00:11:20 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2kbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-xdw92_openshift-dns(f5434f05-9acb-4d0c-a175-d5efc97194da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:20 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.901430 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-xdw92" podUID="f5434f05-9acb-4d0c-a175-d5efc97194da" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.903304 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.911318 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gbldp" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.914086 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.918273 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.924469 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod131f7f53_e6cd_4e60_87d5_5a67b6f40b76.slice/crio-0aa5086fea2429e6fed52dc6dce891b95283b5b90be333f7067ae7a3bd80420e WatchSource:0}: Error finding container 0aa5086fea2429e6fed52dc6dce891b95283b5b90be333f7067ae7a3bd80420e: Status 404 returned error can't find the container with id 0aa5086fea2429e6fed52dc6dce891b95283b5b90be333f7067ae7a3bd80420e Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.927250 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ft9m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
multus-additional-cni-plugins-gbldp_openshift-multus(131f7f53-e6cd-4e60-87d5-5a67b6f40b76): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.928616 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gbldp" podUID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" Feb 02 00:11:20 crc kubenswrapper[5108]: W0202 00:11:20.930952 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93334c92_cf5f_4978_b891_2b8e5ea35025.slice/crio-1f70210d957ec5ce7db7c62f748d782e0b8fc0f4431be452c3767c2bc1c0895e WatchSource:0}: Error finding container 1f70210d957ec5ce7db7c62f748d782e0b8fc0f4431be452c3767c2bc1c0895e: Status 404 returned error can't find the container with id 1f70210d957ec5ce7db7c62f748d782e0b8fc0f4431be452c3767c2bc1c0895e Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.934847 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:20 crc 
kubenswrapper[5108]: E0202 00:11:20.938259 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:20 crc kubenswrapper[5108]: E0202 00:11:20.940536 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.968395 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.968501 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.968515 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.968538 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
02 00:11:20 crc kubenswrapper[5108]: I0202 00:11:20.968552 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:20Z","lastTransitionTime":"2026-02-02T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.072304 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.072399 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.072426 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.072456 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.072477 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.140867 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.141104 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.141135 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.141160 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.141352 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.141223445 +0000 UTC m=+81.416720415 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.147558 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.160914 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 02 00:11:21 crc kubenswrapper[5108]: apiVersion: v1 Feb 02 00:11:21 crc kubenswrapper[5108]: clusters: Feb 02 00:11:21 crc kubenswrapper[5108]: - cluster: Feb 02 00:11:21 crc kubenswrapper[5108]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 02 00:11:21 crc kubenswrapper[5108]: server: https://api-int.crc.testing:6443 Feb 02 00:11:21 crc kubenswrapper[5108]: name: default-cluster Feb 02 00:11:21 crc kubenswrapper[5108]: contexts: Feb 02 00:11:21 crc kubenswrapper[5108]: - context: Feb 02 00:11:21 crc kubenswrapper[5108]: cluster: default-cluster Feb 02 00:11:21 crc kubenswrapper[5108]: namespace: default Feb 02 00:11:21 crc kubenswrapper[5108]: user: default-auth Feb 02 00:11:21 crc kubenswrapper[5108]: name: default-context Feb 02 00:11:21 crc kubenswrapper[5108]: current-context: default-context Feb 02 00:11:21 crc kubenswrapper[5108]: kind: Config Feb 02 00:11:21 crc kubenswrapper[5108]: preferences: {} Feb 02 00:11:21 crc kubenswrapper[5108]: users: Feb 02 00:11:21 crc kubenswrapper[5108]: - name: default-auth Feb 02 00:11:21 crc kubenswrapper[5108]: user: Feb 02 00:11:21 crc kubenswrapper[5108]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 02 00:11:21 crc kubenswrapper[5108]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 02 00:11:21 crc kubenswrapper[5108]: EOF Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfgl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-66k84_openshift-ovn-kubernetes(d0c5973e-49ea-41a0-87d5-c8e867ee8a66): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" 
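Every container start in this window (dns-node-resolver, egress-router-binary-copy, machine-config-daemon, kube-rbac-proxy, kubecfg-setup) fails with the same CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars". The kubelet will not build the legacy per-service environment variables until its service informer has completed one successful LIST against the apiserver; early in this bootstrap window, with the network plugin not ready and the node-local webhook endpoint refusing connections, that sync has not happened yet, so every pod sync aborts before the container is created. A minimal Go sketch of that gate, assuming it mirrors the kubelet check that emits this exact message (the helper name and signature here are illustrative, not a verbatim kubelet excerpt):

    package main

    import (
        "errors"
        "fmt"
    )

    // getServiceEnvVarMap sketches the kubelet-side gate (assumption, not a
    // verbatim copy): serviceHasSynced reports whether the service informer
    // has completed an initial LIST from the apiserver. Until it returns
    // true, no service env vars can be built, and container creation fails
    // with CreateContainerConfigError.
    func getServiceEnvVarMap(serviceHasSynced func() bool) (map[string]string, error) {
        if !serviceHasSynced() {
            // The exact message logged above for node-resolver-xdw92,
            // multus-additional-cni-plugins-gbldp, machine-config-daemon-d74m7
            // and ovnkube-node-66k84.
            return nil, errors.New("services have not yet been read at least once, cannot construct envvars")
        }
        // Normally the KUBERNETES_SERVICE_HOST-style variables would be built here.
        return map[string]string{}, nil
    }

    func main() {
        // While the informer is unsynced, every pod sync aborts before the
        // container is created and the pod worker logs
        // "Error syncing pod, skipping".
        if _, err := getServiceEnvVarMap(func() bool { return false }); err != nil {
            fmt.Println("CreateContainerConfigError:", err)
        }
    }

The identical message across five different containers therefore points at one node-level condition (the kubelet's informers not yet synced), not at faults in the individual pods.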
Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.162147 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.174285 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.174321 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.174334 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.174353 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.174366 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.242176 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.242441 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242501 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.242455335 +0000 UTC m=+81.517952305 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242638 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.242675 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.242727 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242779 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.242741463 +0000 UTC m=+81.518238433 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242916 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242919 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.243071 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.24303809 +0000 UTC m=+81.518535060 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.242942 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.243211 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.243450 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.24340199 +0000 UTC m=+81.518898960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.276949 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.277026 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.277051 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.277084 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.277112 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.293443 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.293529 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.293549 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.293578 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.293599 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.306330 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.311093 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.311136 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.311147 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.311164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.311176 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.325900 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... image list identical to the preceding patch attempt, elided ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.332382 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.332457 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.332477 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.332508 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.332533 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.342288 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"9893258dab7a033d522aebee422e4d3ac3767f3fa09f53c77a4ed6caa75683e5"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.343590 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.343728 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.343822 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:22.343797859 +0000 UTC m=+81.619294789 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.343982 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q22wv" event={"ID":"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce","Type":"ContainerStarted","Data":"61e808d3ffdc264d45983a8def8fd8ab9b983bc91f4dc5058ee391798edad7f4"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.346426 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 02 00:11:21 crc kubenswrapper[5108]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 02 00:11:21 crc kubenswrapper[5108]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfg4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-q22wv_openshift-multus(24f8cedc-9b82-4ef7-a7db-4ce57803e0ce): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.346500 5108 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:21 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: source /etc/kubernetes/apiserver-url.env Feb 02 00:11:21 crc kubenswrapper[5108]: else Feb 02 00:11:21 crc kubenswrapper[5108]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 02 00:11:21 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:N
ETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.348120 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.348336 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-q22wv" podUID="24f8cedc-9b82-4ef7-a7db-4ce57803e0ce" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 
00:11:21.348522 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"1f70210d957ec5ce7db7c62f748d782e0b8fc0f4431be452c3767c2bc1c0895e"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.348521 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the 00:11:21.325900 attempt above, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.351342 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerStarted","Data":"0aa5086fea2429e6fed52dc6dce891b95283b5b90be333f7067ae7a3bd80420e"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.351320 5108 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.353442 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.354044 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.354104 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.354128 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.354164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.354189 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.354212 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ft9m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-gbldp_openshift-multus(131f7f53-e6cd-4e60-87d5-5a67b6f40b76): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.354540 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.355404 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gbldp" podUID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.355943 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xdw92" event={"ID":"f5434f05-9acb-4d0c-a175-d5efc97194da","Type":"ContainerStarted","Data":"11177a9280a46b5ae3e32cd16fd55c985bd85a843c725f73bb7e0729cf24754b"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.362246 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" 
event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerStarted","Data":"b2c9667b3266dc7724f630d2a6f5b000f311e7134a92929d6e1f8855fc654058"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.363133 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:21 crc kubenswrapper[5108]: set -uo pipefail Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 02 00:11:21 crc kubenswrapper[5108]: HOSTS_FILE="/etc/hosts" Feb 02 00:11:21 crc kubenswrapper[5108]: TEMP_FILE="/tmp/hosts.tmp" Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # Make a temporary file with the old hosts file's attributes. Feb 02 00:11:21 crc kubenswrapper[5108]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 02 00:11:21 crc kubenswrapper[5108]: echo "Failed to preserve hosts file. Exiting." Feb 02 00:11:21 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: while true; do Feb 02 00:11:21 crc kubenswrapper[5108]: declare -A svc_ips Feb 02 00:11:21 crc kubenswrapper[5108]: for svc in "${services[@]}"; do Feb 02 00:11:21 crc kubenswrapper[5108]: # Fetch service IP from cluster dns if present. We make several tries Feb 02 00:11:21 crc kubenswrapper[5108]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 02 00:11:21 crc kubenswrapper[5108]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 02 00:11:21 crc kubenswrapper[5108]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 02 00:11:21 crc kubenswrapper[5108]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:21 crc kubenswrapper[5108]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:21 crc kubenswrapper[5108]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 02 00:11:21 crc kubenswrapper[5108]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 02 00:11:21 crc kubenswrapper[5108]: for i in ${!cmds[*]} Feb 02 00:11:21 crc kubenswrapper[5108]: do Feb 02 00:11:21 crc kubenswrapper[5108]: ips=($(eval "${cmds[i]}")) Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: svc_ips["${svc}"]="${ips[@]}" Feb 02 00:11:21 crc kubenswrapper[5108]: break Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # Update /etc/hosts only if we get valid service IPs Feb 02 00:11:21 crc kubenswrapper[5108]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 02 00:11:21 crc kubenswrapper[5108]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 02 00:11:21 crc kubenswrapper[5108]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 02 00:11:21 crc kubenswrapper[5108]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 02 00:11:21 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:21 crc kubenswrapper[5108]: continue Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # Append resolver entries for services Feb 02 00:11:21 crc kubenswrapper[5108]: rc=0 Feb 02 00:11:21 crc kubenswrapper[5108]: for svc in "${!svc_ips[@]}"; do Feb 02 00:11:21 crc kubenswrapper[5108]: for ip in ${svc_ips[${svc}]}; do Feb 02 00:11:21 crc kubenswrapper[5108]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ $rc -ne 0 ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:21 crc kubenswrapper[5108]: continue Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 02 00:11:21 crc kubenswrapper[5108]: # Replace /etc/hosts with our modified version if needed Feb 02 00:11:21 crc kubenswrapper[5108]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 02 00:11:21 crc kubenswrapper[5108]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: sleep 60 & wait Feb 02 00:11:21 crc kubenswrapper[5108]: unset svc_ips Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2kbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-xdw92_openshift-dns(f5434f05-9acb-4d0c-a175-d5efc97194da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.364196 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"7a2461c6a473f94ba1ea1904c2b0cd4abbd44d50e56c3ab93bba762c867a78ab"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.364343 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-xdw92" podUID="f5434f05-9acb-4d0c-a175-d5efc97194da" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.365777 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 02 00:11:21 crc kubenswrapper[5108]: set -euo pipefail Feb 02 00:11:21 crc kubenswrapper[5108]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 02 00:11:21 crc kubenswrapper[5108]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 02 00:11:21 crc kubenswrapper[5108]: # As the secret mount is optional we must wait for the files to be present. Feb 02 00:11:21 crc kubenswrapper[5108]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 02 00:11:21 crc kubenswrapper[5108]: TS=$(date +%s) Feb 02 00:11:21 crc kubenswrapper[5108]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 02 00:11:21 crc kubenswrapper[5108]: HAS_LOGGED_INFO=0 Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: log_missing_certs(){ Feb 02 00:11:21 crc kubenswrapper[5108]: CUR_TS=$(date +%s) Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Feb 02 00:11:21 crc kubenswrapper[5108]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 02 00:11:21 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 02 00:11:21 crc kubenswrapper[5108]: HAS_LOGGED_INFO=1 Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: } Feb 02 00:11:21 crc kubenswrapper[5108]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 02 00:11:21 crc kubenswrapper[5108]: log_missing_certs Feb 02 00:11:21 crc kubenswrapper[5108]: sleep 5 Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 02 00:11:21 crc kubenswrapper[5108]: exec /usr/bin/kube-rbac-proxy \ Feb 02 00:11:21 crc kubenswrapper[5108]: --logtostderr \ Feb 02 00:11:21 crc kubenswrapper[5108]: --secure-listen-address=:9108 \ Feb 02 00:11:21 crc kubenswrapper[5108]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 02 00:11:21 crc kubenswrapper[5108]: --upstream=http://127.0.0.1:29108/ \ Feb 02 00:11:21 crc kubenswrapper[5108]: --tls-private-key-file=${TLS_PK} \ Feb 02 00:11:21 crc kubenswrapper[5108]: --tls-cert-file=${TLS_CERT} Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsmhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ccnbr_openshift-ovn-kubernetes(0298f7da-43a3-48a4-8e32-b772a82bd62d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.365961 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"5f555816d6ec189f7bd3d7e5ba213cdc54e4ba6984fd49cb3eb011639902fdde"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.366048 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.368995 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 02 
00:11:21 crc kubenswrapper[5108]: apiVersion: v1 Feb 02 00:11:21 crc kubenswrapper[5108]: clusters: Feb 02 00:11:21 crc kubenswrapper[5108]: - cluster: Feb 02 00:11:21 crc kubenswrapper[5108]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 02 00:11:21 crc kubenswrapper[5108]: server: https://api-int.crc.testing:6443 Feb 02 00:11:21 crc kubenswrapper[5108]: name: default-cluster Feb 02 00:11:21 crc kubenswrapper[5108]: contexts: Feb 02 00:11:21 crc kubenswrapper[5108]: - context: Feb 02 00:11:21 crc kubenswrapper[5108]: cluster: default-cluster Feb 02 00:11:21 crc kubenswrapper[5108]: namespace: default Feb 02 00:11:21 crc kubenswrapper[5108]: user: default-auth Feb 02 00:11:21 crc kubenswrapper[5108]: name: default-context Feb 02 00:11:21 crc kubenswrapper[5108]: current-context: default-context Feb 02 00:11:21 crc kubenswrapper[5108]: kind: Config Feb 02 00:11:21 crc kubenswrapper[5108]: preferences: {} Feb 02 00:11:21 crc kubenswrapper[5108]: users: Feb 02 00:11:21 crc kubenswrapper[5108]: - name: default-auth Feb 02 00:11:21 crc kubenswrapper[5108]: user: Feb 02 00:11:21 crc kubenswrapper[5108]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 02 00:11:21 crc kubenswrapper[5108]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 02 00:11:21 crc kubenswrapper[5108]: EOF Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfgl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-66k84_openshift-ovn-kubernetes(d0c5973e-49ea-41a0-87d5-c8e867ee8a66): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.369824 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: source "/env/_master" Feb 02 00:11:21 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc 
kubenswrapper[5108]: ovn_v6_join_subnet_opt= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # This is needed so that converting clusters from GA to TP Feb 02 00:11:21 crc kubenswrapper[5108]: # will rollout control plane pods as well Feb 02 00:11:21 crc kubenswrapper[5108]: network_segmentation_enabled_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: multi_network_enabled_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "true" != "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: route_advertisements_enable_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # Enable multi-network policy if configured (control-plane always full mode) Feb 02 00:11:21 crc kubenswrapper[5108]: multi_network_policy_enabled_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 
crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: # Enable admin network policy if configured (control-plane always full mode) Feb 02 00:11:21 crc kubenswrapper[5108]: admin_network_policy_enabled_flag= Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: if [ "shared" == "shared" ]; then Feb 02 00:11:21 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode shared" Feb 02 00:11:21 crc kubenswrapper[5108]: elif [ "shared" == "local" ]; then Feb 02 00:11:21 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode local" Feb 02 00:11:21 crc kubenswrapper[5108]: else Feb 02 00:11:21 crc kubenswrapper[5108]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 02 00:11:21 crc kubenswrapper[5108]: exit 1 Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 02 00:11:21 crc kubenswrapper[5108]: exec /usr/bin/ovnkube \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-interconnect \ Feb 02 00:11:21 crc kubenswrapper[5108]: --init-cluster-manager "${K8S_NODE}" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 02 00:11:21 crc kubenswrapper[5108]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --metrics-bind-address "127.0.0.1:29108" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --metrics-enable-pprof \ Feb 02 00:11:21 crc kubenswrapper[5108]: --metrics-enable-config-duration \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${ovn_v4_join_subnet_opt} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${ovn_v6_join_subnet_opt} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${dns_name_resolver_enabled_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${persistent_ips_enabled_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${multi_network_enabled_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${network_segmentation_enabled_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${gateway_mode_flags} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${route_advertisements_enable_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${preconfigured_udn_addresses_enable_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-egress-ip=true \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-egress-firewall=true \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-egress-qos=true \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-egress-service=true \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-multicast \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-multi-external-gateway=true \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${multi_network_policy_enabled_flag} \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${admin_network_policy_enabled_flag} Feb 02 00:11:21 crc kubenswrapper[5108]: 
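The ovnkube-cluster-manager command dumped above is a rendered template: the operator substitutes literal values into the script before the pod ever runs, which is why it contains tautologies like if [[ "" != "" ]] and if [[ "true" == "true" ]]. Read that way, the rendered text documents the effective configuration of this cluster: shared gateway mode, multi-network, network segmentation, admin network policy, and persistent IPs enabled; join/transit subnet overrides, DNS name resolver, route advertisements, preconfigured UDN addresses, and multi-network policy disabled. The kubecfg-setup init container just before it is rendered the same way, writing a static /etc/ovn/kubeconfig that points at https://api-int.crc.testing:6443 using the serviceaccount CA and the rotated ovnkube client certificate.

Each conditional assembles an optional flag into a variable that is later expanded unquoted on the exec line, so disabled features contribute nothing to the argv. A minimal sketch of that pattern, with a hypothetical ENABLE_FOO standing in for the operator-templated literal:

    #!/bin/bash
    set -euo pipefail
    ENABLE_FOO="true"   # hypothetical value; the operator inlines a literal here
    foo_flag=
    if [[ "${ENABLE_FOO}" == "true" ]]; then
      foo_flag="--enable-foo"
    fi
    # Deliberately unquoted: an empty foo_flag vanishes from the argument list
    # instead of passing an empty-string argument to the program.
    exec /bin/echo ${foo_flag} --always-present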
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsmhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ccnbr_openshift-ovn-kubernetes(0298f7da-43a3-48a4-8e32-b772a82bd62d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.370185 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.370619 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.370998 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.370947 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r6t6x" event={"ID":"ddd95e62-4b23-4887-b6e7-364a01924524","Type":"ContainerStarted","Data":"896942b9503dfb123e81fe12f3e839f49bd2881d35de050a50cfa0fc867bb9e6"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.371831 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"9997fcb85f88b9cc0029d5e0b7da92d29fdfbfbe05e37cdd43cb8ba96499fdc5"} Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.371908 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.373065 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 02 00:11:21 crc kubenswrapper[5108]: while [ true ]; Feb 02 00:11:21 crc kubenswrapper[5108]: do Feb 02 00:11:21 crc kubenswrapper[5108]: for f in 
$(ls /tmp/serviceca); do Feb 02 00:11:21 crc kubenswrapper[5108]: echo $f Feb 02 00:11:21 crc kubenswrapper[5108]: ca_file_path="/tmp/serviceca/${f}" Feb 02 00:11:21 crc kubenswrapper[5108]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 02 00:11:21 crc kubenswrapper[5108]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 02 00:11:21 crc kubenswrapper[5108]: if [ -e "${reg_dir_path}" ]; then Feb 02 00:11:21 crc kubenswrapper[5108]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 02 00:11:21 crc kubenswrapper[5108]: else Feb 02 00:11:21 crc kubenswrapper[5108]: mkdir $reg_dir_path Feb 02 00:11:21 crc kubenswrapper[5108]: cp $ca_file_path $reg_dir_path/ca.crt Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: for d in $(ls /etc/docker/certs.d); do Feb 02 00:11:21 crc kubenswrapper[5108]: echo $d Feb 02 00:11:21 crc kubenswrapper[5108]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 02 00:11:21 crc kubenswrapper[5108]: reg_conf_path="/tmp/serviceca/${dp}" Feb 02 00:11:21 crc kubenswrapper[5108]: if [ ! -e "${reg_conf_path}" ]; then Feb 02 00:11:21 crc kubenswrapper[5108]: rm -rf /etc/docker/certs.d/$d Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: sleep 60 & wait ${!} Feb 02 00:11:21 crc kubenswrapper[5108]: done Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8fbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-r6t6x_openshift-image-registry(ddd95e62-4b23-4887-b6e7-364a01924524): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.373985 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: source "/env/_master" 
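The node-ca loop dumped above keeps /etc/docker/certs.d on the host in sync with the serviceca ConfigMap mounted at /tmp/serviceca: it copies each CA into the matching registry directory as ca.crt, prunes directories whose source file has disappeared, and then parks in sleep 60 & wait ${!} so the TERM trap can kill the backgrounded sleep and exit promptly. Because ConfigMap keys cannot contain ":", registry host:port names are stored with ".." in place of ":", and the two sed expressions round-trip that encoding. Shown standalone with an illustrative registry name (the :5000 port is an assumption, not from this log):

    #!/bin/bash
    # ConfigMap key -> certs.d directory name: the last ".." becomes ":".
    f="image-registry.openshift-image-registry.svc..5000"
    echo "${f}" | sed -r 's/(.*)\.\./\1:/'
    # prints: image-registry.openshift-image-registry.svc:5000

    # Reverse mapping, used when pruning /etc/docker/certs.d.
    d="image-registry.openshift-image-registry.svc:5000"
    echo "${d}" | sed -r 's/(.*):/\1\.\./'
    # prints: image-registry.openshift-image-registry.svc..5000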
Feb 02 00:11:21 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 02 00:11:21 crc kubenswrapper[5108]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 02 00:11:21 crc kubenswrapper[5108]: ho_enable="--enable-hybrid-overlay" Feb 02 00:11:21 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 02 00:11:21 crc kubenswrapper[5108]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 02 00:11:21 crc kubenswrapper[5108]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 02 00:11:21 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 02 00:11:21 crc kubenswrapper[5108]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --webhook-host=127.0.0.1 \ Feb 02 00:11:21 crc kubenswrapper[5108]: --webhook-port=9743 \ Feb 02 00:11:21 crc kubenswrapper[5108]: ${ho_enable} \ Feb 02 00:11:21 crc kubenswrapper[5108]: --enable-interconnect \ Feb 02 00:11:21 crc kubenswrapper[5108]: --disable-approver \ Feb 02 00:11:21 crc kubenswrapper[5108]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --wait-for-kubernetes-api=200s \ Feb 02 00:11:21 crc kubenswrapper[5108]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.374249 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-r6t6x" podUID="ddd95e62-4b23-4887-b6e7-364a01924524" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.376398 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:21 crc kubenswrapper[5108]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 02 00:11:21 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 02 00:11:21 crc kubenswrapper[5108]: set -o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: source "/env/_master" Feb 02 00:11:21 crc kubenswrapper[5108]: set +o allexport Feb 02 00:11:21 crc kubenswrapper[5108]: fi Feb 02 00:11:21 crc kubenswrapper[5108]: Feb 02 00:11:21 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 02 00:11:21 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 02 00:11:21 crc kubenswrapper[5108]: --disable-webhook \ Feb 02 00:11:21 crc kubenswrapper[5108]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 02 00:11:21 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 02 00:11:21 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:21 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.377569 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.383470 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd
602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"si
zeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.388947 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.389020 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.389043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.389075 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.389105 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.394023 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"alloca
tedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu
\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.403867 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1e
cf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.404054 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.405528 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.408100 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.408152 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.408169 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.408193 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.408210 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.423718 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.434028 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.449682 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.458764 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.471429 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name
\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.485799 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.502668 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.511335 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.511414 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.511435 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.511465 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.511485 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.530101 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.543184 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.557944 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.567578 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.569379 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.569644 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Feb 02 00:11:21 
crc kubenswrapper[5108]: I0202 00:11:21.583813 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.593543 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"me
mory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"con
tainerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.595799 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.600906 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.606918 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.609575 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614147 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614215 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614248 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614290 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614302 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.614551 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.619890 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Feb 02 
00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.622757 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.633036 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.645675 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.647362 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.653851 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.657215 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.668358 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.669004 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.669211 5108 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.674981 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.675919 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" 
path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.678770 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.680177 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.683930 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.686074 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.686288 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.688893 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.692788 5108 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.697855 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.699483 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.703796 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.709696 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\
\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.710302 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.715949 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.716815 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.716858 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.716867 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.716885 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.716897 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.719166 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.722561 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.724064 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.731763 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.732689 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.736333 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.737937 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.738021 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.748601 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.750771 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.753604 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.758935 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.763688 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.764511 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.771387 5108 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.771534 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.784801 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.784939 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"
},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"
cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.797106 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.797720 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.801403 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.805850 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.806765 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.807972 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.813551 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.814573 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.815859 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819439 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819459 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819488 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819507 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.819815 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.821801 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.834168 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{
\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.835557 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.836311 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" 
path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.838482 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.839994 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.841795 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.843623 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.845995 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.847557 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.849159 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.864763 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.866000 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.865996 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.866016 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.866119 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.866170 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.866486 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.866524 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:21 crc kubenswrapper[5108]: E0202 00:11:21.866385 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.875217 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.913600 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 
00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.923108 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.923305 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.923371 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.923442 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.923499 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:21Z","lastTransitionTime":"2026-02-02T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:21 crc kubenswrapper[5108]: I0202 00:11:21.954154 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.002971 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.027112 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.027183 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.027202 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.027263 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.027284 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.040130 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.095106 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.115415 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.130334 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.130776 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.131491 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.132078 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.132431 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.156780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.158593 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.158669 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.158687 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.158800 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.158771359 +0000 UTC m=+83.434268469 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.163170 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.199536 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.236423 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.236895 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.239149 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.238964 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.239422 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.239905 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.258361 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.258588 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.258676 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.258717 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.258946 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.258985 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259007 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259074 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.259040215 +0000 UTC m=+83.534537175 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259117 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.259103726 +0000 UTC m=+83.534600686 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259157 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259349 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.259301162 +0000 UTC m=+83.534798212 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.259822 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.260046 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.26001701 +0000 UTC m=+83.535513980 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.280026 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.308274 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.309475 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:22 crc 
kubenswrapper[5108]: E0202 00:11:22.309802 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.314983 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.342846 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.342906 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.342922 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.342944 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.342958 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.357095 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.359566 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.359756 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: E0202 00:11:22.359831 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:24.359809343 +0000 UTC m=+83.635306273 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.393302 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.435209 5108 status_manager.go:919] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.445261 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.445350 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.445400 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.445449 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.445499 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.472506 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.512284 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.548061 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.548131 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.548150 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.548173 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.548187 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.554547 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.594706 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.638817 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.651031 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.651098 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.651117 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.651141 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.651159 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.688549 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.714762 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.754825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.754867 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.754881 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.754901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.754972 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.755961 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.793774 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.834541 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.857543 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.857598 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.857619 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.857646 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.857664 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.874252 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.914962 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.961031 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.961099 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.961118 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.961144 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:22 crc kubenswrapper[5108]: I0202 00:11:22.961166 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:22Z","lastTransitionTime":"2026-02-02T00:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.063831 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.063898 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.063918 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.063942 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.063960 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.166502 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.166570 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.166584 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.166606 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.166623 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.269877 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.269930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.269943 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.269962 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.269976 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.373402 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.373730 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.373835 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.373947 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.374036 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.477409 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.477466 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.477484 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.477537 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.477558 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.557639 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.557703 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.557673 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:23 crc kubenswrapper[5108]: E0202 00:11:23.557901 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.557934 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:23 crc kubenswrapper[5108]: E0202 00:11:23.558042 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:23 crc kubenswrapper[5108]: E0202 00:11:23.558107 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:23 crc kubenswrapper[5108]: E0202 00:11:23.558134 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.582682 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.583092 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.583342 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.583739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.584273 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.688355 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.688670 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.688810 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.688933 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.689057 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.791617 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.791666 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.791683 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.791707 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.791722 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.894589 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.894876 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.895045 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.895331 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.895476 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.997811 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.998732 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.998818 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.998905 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:23 crc kubenswrapper[5108]: I0202 00:11:23.999027 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:23Z","lastTransitionTime":"2026-02-02T00:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.101089 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.101144 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.101154 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.101169 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.101178 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.183325 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.183516 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.183534 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.183546 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.183602 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.183584128 +0000 UTC m=+87.459081058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.203806 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.203862 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.203873 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.203893 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.203905 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.284694 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.284821 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.284898 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.284912 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.28486822 +0000 UTC m=+87.560365180 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.284974 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.284952952 +0000 UTC m=+87.560449882 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.285013 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.285082 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285214 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285287 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.28527537 +0000 UTC m=+87.560772300 (durationBeforeRetry 4s). 
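Annotation: the UnmountVolume.TearDown failure above is a different root cause than the ConfigMap errors: the kubelet cannot find kubevirt.io.hostpath-provisioner among the CSI drivers registered on this node, which is expected until the driver's pod starts and re-registers via the kubelet plugin watcher. A sketch to inspect what is currently registered on the node (assumes kubeconfig access; the node name "crc" comes from the log):

```python
# Sketch: list the CSI drivers registered on node "crc" via its CSINode
# object. The TearDown error above means kubevirt.io.hostpath-provisioner
# is missing from this list until its driver pod comes back up.
from kubernetes import client, config

config.load_kube_config()
csinode = client.StorageV1Api().read_csi_node("crc")
for d in (csinode.spec.drivers or []):
    print(d.name, d.node_id)
```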
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285310 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285745 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285762 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.285924 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.285912497 +0000 UTC m=+87.561409427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.306657 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.306739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.306759 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.306788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.306806 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.385772 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.385981 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:24 crc kubenswrapper[5108]: E0202 00:11:24.386096 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:28.386069179 +0000 UTC m=+87.661566109 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.409751 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.409799 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.409809 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.409825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.409841 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.513104 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.513145 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.513155 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.513174 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.513184 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.614785 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.614833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.614843 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.614860 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.614873 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.718111 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.718189 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.718207 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.718247 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.718264 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.821874 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.821950 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.821970 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.821999 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.822022 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.925368 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.925444 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.925469 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.925503 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:24 crc kubenswrapper[5108]: I0202 00:11:24.925530 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:24Z","lastTransitionTime":"2026-02-02T00:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.028185 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.028318 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.028337 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.028364 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.028381 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.130918 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.131151 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.131160 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.131176 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.131185 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.233666 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.233776 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.233787 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.233821 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.233832 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.336447 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.336499 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.336509 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.336526 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.336538 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.438960 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.439032 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.439051 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.439077 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.439095 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542090 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542159 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542171 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542189 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.542199 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.557302 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.557348 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.557302 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:25 crc kubenswrapper[5108]: E0202 00:11:25.557459 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:25 crc kubenswrapper[5108]: E0202 00:11:25.557522 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:25 crc kubenswrapper[5108]: E0202 00:11:25.557683 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.557782 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:25 crc kubenswrapper[5108]: E0202 00:11:25.557877 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.645329 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.645401 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.645415 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.645438 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.645454 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.747875 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.747928 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.747942 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.747961 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.747974 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.850471 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.850559 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.850584 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.850615 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.850638 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.952654 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.952713 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.952723 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.952739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:25 crc kubenswrapper[5108]: I0202 00:11:25.952749 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:25Z","lastTransitionTime":"2026-02-02T00:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.054670 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.054720 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.054734 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.054765 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.054778 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.157192 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.157302 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.157320 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.157377 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.157395 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.260301 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.260383 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.260401 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.260427 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.260445 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.363145 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.363258 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.363277 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.363304 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.363323 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.466015 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.466107 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.466130 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.466162 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.466186 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.569163 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.569259 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.569280 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.569310 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.569324 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.671551 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.671593 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.671604 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.671619 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.671629 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.773834 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.773883 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.773896 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.773947 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.773960 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.876752 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.876809 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.876818 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.876835 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.876846 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.980758 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.980848 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.980868 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.980895 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:26 crc kubenswrapper[5108]: I0202 00:11:26.980917 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:26Z","lastTransitionTime":"2026-02-02T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.084386 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.084482 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.084566 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.084599 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.084624 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.187891 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.187983 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.188007 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.188043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.188070 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.290508 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.290601 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.290627 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.290664 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.290688 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.392541 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.392606 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.392626 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.392652 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.392670 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.495547 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.495617 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.495635 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.495660 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.495680 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.563162 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.563215 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.563167 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:27 crc kubenswrapper[5108]: E0202 00:11:27.563328 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.563382 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:27 crc kubenswrapper[5108]: E0202 00:11:27.563521 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:27 crc kubenswrapper[5108]: E0202 00:11:27.563554 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:27 crc kubenswrapper[5108]: E0202 00:11:27.563628 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.598652 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.598733 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.598754 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.598783 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.598803 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.701352 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.701426 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.701446 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.701472 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.701491 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.804607 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.804668 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.804687 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.804713 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.804731 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.906986 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.907082 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.907103 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.907135 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:27 crc kubenswrapper[5108]: I0202 00:11:27.907153 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:27Z","lastTransitionTime":"2026-02-02T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.009951 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.010104 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.010212 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.010303 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.010334 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.114628 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.114822 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.114848 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.114913 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.114942 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.218455 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.218526 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.218549 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.218575 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.218594 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.235798 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.236110 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.236196 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.236224 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.236433 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.236385658 +0000 UTC m=+95.511882638 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.322517 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.322566 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.322579 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.322593 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.322603 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
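[Editor's note] The MountVolume failure above is for a projected service-account volume: kube-api-access-* is assembled from a bound token plus the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and the kubelet reports "object ... not registered" because it has not yet observed those objects after its restart. A sketch of that volume's shape using the k8s.io/api types; the field values are the usual kubelet-injected defaults, assumed rather than read from this log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	exp := int64(3607) // typical bound-token lifetime; an assumption, not from the log
	vol := corev1.Volume{
		Name: "kube-api-access-l7w75",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &exp,
					}},
					// The two objects reported as "not registered" above:
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```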
Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.337863 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338005 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.337978299 +0000 UTC m=+95.613475229 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.338174 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.338222 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.338280 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338425 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338498 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338548 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.338515053 +0000 UTC m=+95.614012023 (durationBeforeRetry 8s). 
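[Editor's note] The UnmountVolume.TearDown failure above is different in kind: the CSI driver kubevirt.io.hostpath-provisioner has not (re-)registered with this kubelet since the restart, so the unmount cannot even construct a CSI client. Node-local plugin registrations show up as sockets under the kubelet plugin registry; a sketch listing them, assuming the stock kubelet path /var/lib/kubelet/plugins_registry:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Registered node plugins (CSI drivers included) expose a registration
	// socket named after the driver in this directory.
	entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		// kubevirt.io.hostpath-provisioner should appear here once it re-registers.
		fmt.Println(e.Name())
	}
}
```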
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338659 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.338622555 +0000 UTC m=+95.614119685 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338823 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338865 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338887 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.338943 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.338929333 +0000 UTC m=+95.614426463 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.425700 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.425768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.425788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.425816 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.425834 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.439264 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.439438 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: E0202 00:11:28.439516 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:36.439496507 +0000 UTC m=+95.714993437 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.529460 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.529562 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.529583 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.529638 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.529659 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.632863 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.632935 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.632953 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.632980 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.632999 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.736221 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.736378 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.736393 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.736412 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.736426 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.839307 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.839366 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.839378 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.839402 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.839414 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.942397 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.942470 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.942488 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.942512 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:28 crc kubenswrapper[5108]: I0202 00:11:28.942531 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:28Z","lastTransitionTime":"2026-02-02T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.046035 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.046131 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.046150 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.046182 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.046399 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.149932 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.149999 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.150016 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.150046 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.150064 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.253043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.253107 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.253125 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.253154 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.253173 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
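[Editor's note] The NodeNotReady block recurs roughly every 100ms (.804, .907, .010, ...) because the kubelet runs a fast node-status loop while the node is unready, retrying on a short fixed interval until Ready flips to true; the interval here is inferred from the timestamps, not from configuration. A toy sketch of that cadence:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ready := false // would flip once a CNI config lands in /etc/kubernetes/cni/net.d/
	tick := time.NewTicker(100 * time.Millisecond)
	defer tick.Stop()
	for i := 0; i < 5 && !ready; i++ {
		<-tick.C
		fmt.Println("setters: Node became not ready (reason=KubeletNotReady)")
	}
}
```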
Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.356259 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.356326 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.356349 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.356378 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.356399 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.459354 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.459450 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.459476 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.459506 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.459526 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.557144 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.557386 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.557445 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:29 crc kubenswrapper[5108]: E0202 00:11:29.557600 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.557630 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:29 crc kubenswrapper[5108]: E0202 00:11:29.557397 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:29 crc kubenswrapper[5108]: E0202 00:11:29.557830 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:29 crc kubenswrapper[5108]: E0202 00:11:29.558005 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.562472 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.562609 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.562674 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.562709 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.562735 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.666067 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.666141 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.666159 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.666185 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.666204 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.768731 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.768830 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.768856 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.768893 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.768914 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.871291 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.871380 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.871400 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.871426 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.871448 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.974158 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.974218 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.974260 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.974279 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:29 crc kubenswrapper[5108]: I0202 00:11:29.974290 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:29Z","lastTransitionTime":"2026-02-02T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.076678 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.076752 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.076772 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.076800 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.076820 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.179674 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.179743 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.179761 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.179787 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.179804 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.282068 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.282124 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.282142 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.282165 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.282184 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.384429 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.384478 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.384488 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.384502 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.384510 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.487375 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.487430 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.487442 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.487466 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.487479 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.589821 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.589886 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.589904 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.589931 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.589949 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.693177 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.693308 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.693331 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.693363 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.693389 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.796578 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.796660 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.796679 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.796701 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.796714 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.899646 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.899739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.899777 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.899810 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.899830 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:30Z","lastTransitionTime":"2026-02-02T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:30 crc kubenswrapper[5108]: I0202 00:11:30.987341 5108 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.003347 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.003444 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.003464 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.003496 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.003516 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
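[Editor's note] The reflector line above ("Caches populated" for *v1.CSIDriver) is the first sign of recovery in this stretch: the kubelet's informer caches are syncing, which is the precondition for clearing both the "object ... not registered" mount errors and the "services have not yet been read" error seen below. A minimal client-go sketch of the same start-then-wait-for-sync pattern for CSIDriver objects (kubeconfig path assumed):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	// Request the informer before Start so the factory knows to run it.
	inf := factory.Storage().V1().CSIDrivers().Informer()
	lister := factory.Storage().V1().CSIDrivers().Lister()
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// The moment WaitForCacheSync returns true is the informer-level
	// equivalent of the "Caches populated" reflector log line.
	if !cache.WaitForCacheSync(stop, inf.HasSynced) {
		panic("CSIDriver cache never synced")
	}
	drivers, err := lister.List(labels.Everything())
	if err != nil {
		panic(err)
	}
	for _, d := range drivers {
		fmt.Println(d.Name)
	}
}
```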
Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.106126 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.106215 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.106275 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.106315 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.106340 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.208717 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.208768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.208785 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.208805 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.208821 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.311769 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.311828 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.311842 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.311860 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.311874 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.414653 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.414768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.414789 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.414820 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.414840 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.517393 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.517451 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.517469 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.517500 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.517519 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.556572 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.556624 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.556797 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.556986 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.557143 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.557912 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.557995 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.558355 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.566960 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 02 00:11:31 crc kubenswrapper[5108]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 02 00:11:31 crc kubenswrapper[5108]: apiVersion: v1 Feb 02 00:11:31 crc kubenswrapper[5108]: clusters: Feb 02 00:11:31 crc kubenswrapper[5108]: - cluster: Feb 02 00:11:31 crc kubenswrapper[5108]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 02 00:11:31 crc kubenswrapper[5108]: server: https://api-int.crc.testing:6443 Feb 02 00:11:31 crc kubenswrapper[5108]: name: default-cluster Feb 02 00:11:31 crc kubenswrapper[5108]: contexts: Feb 02 00:11:31 crc kubenswrapper[5108]: - context: Feb 02 00:11:31 crc kubenswrapper[5108]: cluster: default-cluster Feb 02 00:11:31 crc kubenswrapper[5108]: namespace: default Feb 02 00:11:31 crc kubenswrapper[5108]: user: default-auth Feb 02 00:11:31 crc kubenswrapper[5108]: name: default-context Feb 02 00:11:31 crc kubenswrapper[5108]: current-context: default-context Feb 02 00:11:31 crc kubenswrapper[5108]: kind: Config Feb 02 00:11:31 crc kubenswrapper[5108]: preferences: {} Feb 02 00:11:31 crc kubenswrapper[5108]: users: Feb 02 00:11:31 crc kubenswrapper[5108]: - name: default-auth Feb 02 00:11:31 crc kubenswrapper[5108]: user: Feb 02 00:11:31 crc kubenswrapper[5108]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 02 00:11:31 crc kubenswrapper[5108]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 02 00:11:31 crc kubenswrapper[5108]: EOF Feb 02 00:11:31 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfgl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-66k84_openshift-ovn-kubernetes(d0c5973e-49ea-41a0-87d5-c8e867ee8a66): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 02 00:11:31 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.569718 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.581604 5108 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2
df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.598454 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.615751 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.620101 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.620164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.620184 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.620209 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.620262 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.628766 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.642002 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.672534 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0
768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.684139 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.691814 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.691930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.691953 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.692006 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.692026 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.696198 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.707684 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBy
tes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"
e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.711789 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.711846 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.711865 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.711887 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.711906 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.712139 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplemental
Groups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.728122 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.728419 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.740065 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.750987 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.767297 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.780605 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.780662 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.780679 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.780699 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.780712 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.784323 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.795955 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.800886 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.800936 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.800948 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.800966 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.800979 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.804159 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.815339 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.816629 5108 
kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/red
hat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b6
0418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.824996 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.825038 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.825079 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.825097 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.825110 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.830610 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.836270 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90
bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e3a7b5ac-876b-4877-b87d-9cb708308d6e\\\",\\\"systemUUID\\\":\\\"e7aab70d-ffc3-4723-87e3-99e45b63c1a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: E0202 00:11:31.836448 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.838581 5108 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.838673 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.838734 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.838753 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.838784 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.840541 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.851963 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.940950 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.941016 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.941029 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.941047 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:31 crc kubenswrapper[5108]: I0202 00:11:31.941080 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:31Z","lastTransitionTime":"2026-02-02T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.044169 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.044257 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.044272 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.044290 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.044304 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.146818 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.146892 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.146905 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.146927 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.146959 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.250172 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.250298 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.250319 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.250345 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.250362 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.353129 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.353201 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.353220 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.353357 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.353378 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.456531 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.456601 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.456619 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.456645 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.456666 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.481223 5108 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.558884 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.558939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.558951 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.558967 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.558980 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:32 crc kubenswrapper[5108]: E0202 00:11:32.559330 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Feb 02 00:11:32 crc kubenswrapper[5108]: E0202 00:11:32.561599 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-d74m7_openshift-machine-config-operator(93334c92-cf5f-4978-b891-2b8e5ea35025): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Feb 02 00:11:32 crc kubenswrapper[5108]: E0202 00:11:32.562764 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660913 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660957 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660970 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.660992 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.661003 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.763071 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.763127 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.763136 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.763151 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.763398 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.785405 5108 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.865746 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.865780 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.865788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.865801 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.865811 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.967723 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.967806 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.967832 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.967863 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:32 crc kubenswrapper[5108]: I0202 00:11:32.967887 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:32Z","lastTransitionTime":"2026-02-02T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.070833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.070926 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.070944 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.070965 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.070977 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.173579 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.173639 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.173652 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.173670 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.173681 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.275809 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.275878 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.275903 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.275932 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.275954 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.378801 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.378873 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.378884 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.378900 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.378910 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.481597 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.481652 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.481663 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.481680 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.481692 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.556795 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.556842 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.556968 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:33 crc kubenswrapper[5108]: E0202 00:11:33.556984 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 02 00:11:33 crc kubenswrapper[5108]: E0202 00:11:33.557320 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.557502 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:33 crc kubenswrapper[5108]: E0202 00:11:33.557902 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163"
Feb 02 00:11:33 crc kubenswrapper[5108]: E0202 00:11:33.557176 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.583755 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.583819 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.583832 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.583849 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.583861 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.685931 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.685975 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.685989 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.686009 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.686020 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.788656 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.788708 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.788724 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.788741 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.789032 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.891214 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.891278 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.891287 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.891304 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.891313 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.994197 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.994246 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.994256 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.994294 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:33 crc kubenswrapper[5108]: I0202 00:11:33.994307 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:33Z","lastTransitionTime":"2026-02-02T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.097367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.097448 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.097468 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.097496 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.097515 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.199914 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.199967 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.199979 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.199997 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.200009 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302481 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302546 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302558 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302578 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.302594 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.405936 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.406022 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.406044 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.406073 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.406095 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.413790 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q22wv" event={"ID":"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce","Type":"ContainerStarted","Data":"9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9"}
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.432351 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.446560 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.462419 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.477497 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.488391 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.499587 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.509290 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.509351 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.509360 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.509375 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.509385 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.521665 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.538283 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.566837 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.608819 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.622537 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.622638 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.622653 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.622673 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.622685 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.629187 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.642351 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 
00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.650087 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.660832 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.669461 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.682355 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc 
kubenswrapper[5108]: I0202 00:11:34.692989 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.708599 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.718644 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.725312 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.725344 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.725353 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.725367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.725376 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.827683 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.827733 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.827743 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.827762 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.827773 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.931602 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.931662 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.931673 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.931699 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:34 crc kubenswrapper[5108]: I0202 00:11:34.931709 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:34Z","lastTransitionTime":"2026-02-02T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.034126 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.034744 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.034788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.034806 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.034817 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.136565 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.136607 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.136615 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.136631 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.136640 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.239939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.240003 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.240020 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.240040 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.240052 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.354828 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.354878 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.354888 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.354904 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.354916 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.421373 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerStarted","Data":"15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.436722 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef
82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.456328 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.457871 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.457923 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.457939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.457961 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.457977 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.475415 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.503134 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.518751 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.537616 5108 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.550262 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.556665 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.556669 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.556670 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:35 crc kubenswrapper[5108]: E0202 00:11:35.556900 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:35 crc kubenswrapper[5108]: E0202 00:11:35.556993 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:35 crc kubenswrapper[5108]: E0202 00:11:35.557464 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.557676 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:35 crc kubenswrapper[5108]: E0202 00:11:35.557826 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.558731 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:35 crc kubenswrapper[5108]: E0202 00:11:35.558891 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.561300 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.561348 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.561368 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.561394 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.561414 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.578737 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.598277 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.614340 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.627155 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\
"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.639139 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.651649 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.681817 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.681926 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.681956 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.681993 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.682020 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.695056 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.711869 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.730085 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.752508 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.773114 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.784663 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.784737 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.784763 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.784795 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.784965 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.789791 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.888833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.888886 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.888898 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.888918 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.888937 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.991830 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.991925 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.991948 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.991987 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:35 crc kubenswrapper[5108]: I0202 00:11:35.992015 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:35Z","lastTransitionTime":"2026-02-02T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.094737 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.094788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.094802 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.094821 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.094833 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.197089 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.197156 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.197168 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.197188 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.197206 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.243414 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.243725 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.243774 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.243788 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.243880 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.243856779 +0000 UTC m=+111.519353709 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.299856 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.299915 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.299934 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.299958 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.299976 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.344406 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.344551 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.344606 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.344633 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344749 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.344670829 +0000 UTC m=+111.620167769 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344811 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344831 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344847 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344854 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.344919 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.345004 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.344973267 +0000 UTC m=+111.620470427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.345033 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.345021628 +0000 UTC m=+111.620518568 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.345085 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.345046829 +0000 UTC m=+111.620543799 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.403081 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.403147 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.403164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.403195 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.403208 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.426657 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3" exitCode=0 Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.426760 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.429543 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xdw92" event={"ID":"f5434f05-9acb-4d0c-a175-d5efc97194da","Type":"ContainerStarted","Data":"22e2a143e93948ce93981443bd6a4c85d0496e1b5144a763c304fc600225a6d1"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.431157 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r6t6x" event={"ID":"ddd95e62-4b23-4887-b6e7-364a01924524","Type":"ContainerStarted","Data":"591f87cda3af3c29bd84b8ad7eb421f7243aa4ec7525512c379d920df7069119"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.434827 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"ab0f2b650398839efb319e4d55c18cc6d56404982fbd82913f7515041dfbbba9"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.434969 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"879ce06a2cae6424fd3915643915f9404b42efdff9a788044d1d7b368c644cc4"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.445807 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.446040 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: E0202 00:11:36.446127 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:11:52.446106815 +0000 UTC m=+111.721603755 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.449095 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.466357 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.477674 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.491111 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508051 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508140 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508160 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508190 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508213 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.508571 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.528255 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.551486 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45a
ced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.565025 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.585071 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin
\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.601820 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.610785 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.610888 5108 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.610916 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.610965 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.610995 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.624158 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.646137 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.664388 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.678577 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.691986 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.705135 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.713432 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 
crc kubenswrapper[5108]: I0202 00:11:36.713485 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.713495 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.713518 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.713530 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.749624 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.762259 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.774065 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.788980 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6045b615-dcb1-429a-b2f5-90320b248abd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T00:11:13Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0202 00:11:12.313632 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 00:11:12.313815 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0202 00:11:12.315198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3137978774/tls.crt::/tmp/serving-cert-3137978774/tls.key\\\\\\\"\\\\nI0202 00:11:13.680162 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 00:11:13.681688 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 00:11:13.681705 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 00:11:13.681740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 00:11:13.681746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 00:11:13.685680 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 00:11:13.685710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685715 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 00:11:13.685723 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 00:11:13.685726 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 00:11:13.685730 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 00:11:13.685733 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 00:11:13.685935 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 00:11:13.688258 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.805890 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e3c71e4-345e-44b7-88f3-6ff82a661fe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://45753d46eaf04a04d8232242cb5b9273b8087a461334236b89b406d7b3cd011f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://59b34c5b6d0dc5352c81d2258e481b0649a209e34f2df5e95ced5af3139958a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd500e236cb056e2c3836e10f2796884308111110209c3cc39f8d32626dc3cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.815422 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.815485 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.815498 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.815516 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.815552 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.821107 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.835561 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-q22wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\
"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q22wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.847943 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdw92" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5434f05-9acb-4d0c-a175-d5efc97194da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://22e2a143e93948ce93981443bd6a4c85d0496e1b5144a763c304fc600225a6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mou
ntPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2kbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdw92\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.860188 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93334c92-cf5f-4978-b891-2b8e5ea35025\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w26ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d74m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.880314 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c78ec217-e9a5-4a2a-90c9-290e82dc59b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a3846417f220c04d8c4d8222619750e9f1711c843cf090372c2cd864a76658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f599a55df72bfd44cf3f1d8d3562a8e4d66af1203173c06b888f689f8889f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2787cbb6c69730094c11e675bff609a6ea3e9fb7fcca8834d224b84a98007a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://410f66abce4b9bb2251494839297906a409eba0d6e4803f6c78e031282645780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://609c46cc2072c68b8031dea359861e95baceaafa6191bddce8204c8fea3a449b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
dctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3103593a08e66d511fea695e86e642fbe6c30f0768e71c4777d9b13641dda1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7e2d6a59225c5802f7452392f136e60431a4b0d4a124177f3b15a34d28e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91c97433b6354245f87f8b895c3c57e54d78d9b39eb859d64e0a375b318758a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.891601 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.904254 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.915648 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77b3aa8-8de9-4633-88e7-03f64903d146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc175b6cf361a922a81825ca08274354ef70efaa361c7f64e2acd23a6b2ec9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8184e340d9f457add3061252876659883abfb7ef7df2874927352d49c99afe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://626a3f19bc54ca1e2d7c1ff7d438eb749ad2dc33f3eb1b340bb1a429ee70f1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1b4f79d400cea547d40b99c29ca1549950e8fd6d3cab08b6ce59535e7fcd4d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.919126 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.919178 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.919191 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.919211 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.919325 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:36Z","lastTransitionTime":"2026-02-02T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.928320 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.937993 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6t6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddd95e62-4b23-4887-b6e7-364a01924524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://591f87cda3af3c29bd84b8ad7eb421f7243aa4ec7525512c379d920df7069119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:11:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8fbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6t6x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.948160 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8bff5-9aab-4843-bf38-52064cc1df59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bcc037947e3b8a86e09f9948749aae495231ffe6cf88ff7098d867f94c3412d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T00:10:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad0b88925196f6bdddbe85872a675b8d1b170ad47be9e6ef82b1fbefb9f313a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:10:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:10:02Z\\\"}},\\\"user\\
\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:10:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.959022 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.971119 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:36 crc kubenswrapper[5108]: I0202 00:11:36.991785 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-66k84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:37 crc 
kubenswrapper[5108]: I0202 00:11:37.002462 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0298f7da-43a3-48a4-8e32-b772a82bd62d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ccnbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.014638 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"131f7f53-e6cd-4e60-87d5-5a67b6f40b76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15a05e291bb4e960bb3ece70c18e0ca2d192fd399050074e456ae8e0cd5c8dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T00:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T00:11:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bina
ry-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbldp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.022040 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.022091 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.022103 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.022119 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.022130 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.024951 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26ppl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f77c18f0-131e-482e-8e09-602b39b0c163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T00:11:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxtcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T00:11:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26ppl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.124566 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.124615 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.124628 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.124646 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.124662 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.228156 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.228784 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.228989 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.229260 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.229503 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.332555 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.332620 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.332634 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.332654 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.332668 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.436037 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.436100 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.436112 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.436136 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.436150 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.440823 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"cb11ed559484d3cfe33ff0dee1351623d3707756e0b564e080a789719b6b19bd"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.443986 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerStarted","Data":"976b7c960dc45b34c63bbb69faf38320c43249f1704bfb4265d24cffa187c7ef"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.446658 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerStarted","Data":"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.446731 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerStarted","Data":"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.538994 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.539074 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.539095 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.539134 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.539154 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.559096 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.559096 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.559812 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:37 crc kubenswrapper[5108]: E0202 00:11:37.559849 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:37 crc kubenswrapper[5108]: E0202 00:11:37.560129 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:37 crc kubenswrapper[5108]: E0202 00:11:37.560542 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.560610 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:37 crc kubenswrapper[5108]: E0202 00:11:37.560733 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.637590 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=17.637560925 podStartE2EDuration="17.637560925s" podCreationTimestamp="2026-02-02 00:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.635556502 +0000 UTC m=+96.911053452" watchObservedRunningTime="2026-02-02 00:11:37.637560925 +0000 UTC m=+96.913057895" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.641281 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.641394 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.641416 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.641445 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.641469 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.691032 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-q22wv" podStartSLOduration=73.69100334 podStartE2EDuration="1m13.69100334s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.673911137 +0000 UTC m=+96.949408087" watchObservedRunningTime="2026-02-02 00:11:37.69100334 +0000 UTC m=+96.966500270" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.691529 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xdw92" podStartSLOduration=73.691524734 podStartE2EDuration="1m13.691524734s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.690581999 +0000 UTC m=+96.966078959" watchObservedRunningTime="2026-02-02 00:11:37.691524734 +0000 UTC m=+96.967021664" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.742179 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.742156775 podStartE2EDuration="18.742156775s" podCreationTimestamp="2026-02-02 00:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.740660205 +0000 UTC m=+97.016157145" watchObservedRunningTime="2026-02-02 00:11:37.742156775 +0000 UTC m=+97.017653715" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.747825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.748043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.748180 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.748354 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.748466 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.813272 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.813219656 podStartE2EDuration="17.813219656s" podCreationTimestamp="2026-02-02 00:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.797749896 +0000 UTC m=+97.073246856" watchObservedRunningTime="2026-02-02 00:11:37.813219656 +0000 UTC m=+97.088716596"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.829323 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-r6t6x" podStartSLOduration=73.829297352 podStartE2EDuration="1m13.829297352s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.829090677 +0000 UTC m=+97.104587657" watchObservedRunningTime="2026-02-02 00:11:37.829297352 +0000 UTC m=+97.104794292"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.842428 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.842398189 podStartE2EDuration="17.842398189s" podCreationTimestamp="2026-02-02 00:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.842018899 +0000 UTC m=+97.117515839" watchObservedRunningTime="2026-02-02 00:11:37.842398189 +0000 UTC m=+97.117895139"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.851423 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.851494 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.851508 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.851529 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.851545 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.904756 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" podStartSLOduration=73.90471697 podStartE2EDuration="1m13.90471697s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:37.902829029 +0000 UTC m=+97.178325969" watchObservedRunningTime="2026-02-02 00:11:37.90471697 +0000 UTC m=+97.180213920"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.954308 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.954342 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.954352 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.954371 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:37 crc kubenswrapper[5108]: I0202 00:11:37.954381 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:37Z","lastTransitionTime":"2026-02-02T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.056937 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.056988 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.057000 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.057019 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.057037 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.372337 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.372774 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.372885 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.372980 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.373065 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.453346 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="976b7c960dc45b34c63bbb69faf38320c43249f1704bfb4265d24cffa187c7ef" exitCode=0
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.453446 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"976b7c960dc45b34c63bbb69faf38320c43249f1704bfb4265d24cffa187c7ef"}
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.455904 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"f33b7fd2bdc58b68b66921615ba814d34a08b3b014ce87d7568901c5e8827ab6"}
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.475390 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.475553 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.475578 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.475606 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.475627 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.578071 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.578161 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.578188 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.578217 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:38 crc kubenswrapper[5108]: I0202 00:11:38.578268 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:38Z","lastTransitionTime":"2026-02-02T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.404806 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.404877 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.404896 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.404920 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.404935 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.462188 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="0f008faf256631411f3e436dcbb8c373c8041ea92bcc52571fdec0ad03f45ff6" exitCode=0
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.462256 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"0f008faf256631411f3e436dcbb8c373c8041ea92bcc52571fdec0ad03f45ff6"}
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.506568 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.506619 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.506632 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.506650 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.506660 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.566597 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:39 crc kubenswrapper[5108]: E0202 00:11:39.566805 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.567570 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:39 crc kubenswrapper[5108]: E0202 00:11:39.567753 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.567914 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:11:39 crc kubenswrapper[5108]: E0202 00:11:39.568046 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.568153 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:11:39 crc kubenswrapper[5108]: E0202 00:11:39.568334 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.609378 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.609436 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.609451 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.609473 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:39 crc kubenswrapper[5108]: I0202 00:11:39.609488 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:39Z","lastTransitionTime":"2026-02-02T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.439832 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.439918 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.439937 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.439965 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.439988 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.468883 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="c8a240fc2274e69a855a1db85ba3f09c991ead80a19c23dff1b81ff2455db9ea" exitCode=0
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.468965 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"c8a240fc2274e69a855a1db85ba3f09c991ead80a19c23dff1b81ff2455db9ea"}
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.542791 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.542859 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.542874 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.542897 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.542922 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.644462 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.644501 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.644510 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.644524 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:40 crc kubenswrapper[5108]: I0202 00:11:40.644534 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:40Z","lastTransitionTime":"2026-02-02T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.276425 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.276485 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.276500 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.276519 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.293714 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.396687 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.396769 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.396794 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.396833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.396856 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.478435 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerStarted","Data":"85fe5dfe261ea98fd7dad0878bb19fe9ffd26af63b2d211af07186d1d412a23a"}
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.500108 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.500182 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.500209 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.500283 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.500307 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.558801 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:11:41 crc kubenswrapper[5108]: E0202 00:11:41.559034 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.559040 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:11:41 crc kubenswrapper[5108]: E0202 00:11:41.559191 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.559223 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.559284 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:11:41 crc kubenswrapper[5108]: E0202 00:11:41.559381 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 02 00:11:41 crc kubenswrapper[5108]: E0202 00:11:41.559470 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.606220 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.606341 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.606367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.606405 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.606436 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.709819 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.709883 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.709901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.709923 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.709937 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.813506 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.813586 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.813605 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.813635 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.813653 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.916146 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.916206 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.916219 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.916264 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 00:11:41 crc kubenswrapper[5108]: I0202 00:11:41.916279 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:41Z","lastTransitionTime":"2026-02-02T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.019334 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.019381 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.019391 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.019408 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.019421 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:42Z","lastTransitionTime":"2026-02-02T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.122491 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.122550 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.122564 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.122588 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.122606 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:42Z","lastTransitionTime":"2026-02-02T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.150085 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.150158 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.150171 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.150192 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.150203 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T00:11:42Z","lastTransitionTime":"2026-02-02T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.214023 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"]
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.361188 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.365682 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.365752 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.366055 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.366365 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.423397 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.423466 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.423630 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.423780 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.423979 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.487126 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="85fe5dfe261ea98fd7dad0878bb19fe9ffd26af63b2d211af07186d1d412a23a" exitCode=0
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.487209 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"85fe5dfe261ea98fd7dad0878bb19fe9ffd26af63b2d211af07186d1d412a23a"}
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526645 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526728 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526799 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526952 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.526973 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.527969 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.528836 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.535145 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.544391 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.545256 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.565769 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cd16b6d-22dc-4e5a-a206-6b8eab5a0533-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-8jt7g\" (UID: \"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: I0202 00:11:42.685779 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g"
Feb 02 00:11:42 crc kubenswrapper[5108]: W0202 00:11:42.715044 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cd16b6d_22dc_4e5a_a206_6b8eab5a0533.slice/crio-5c8bddf74e03cc03721e912aa458f33a0231b71ed8166f9f057257e5015e477f WatchSource:0}: Error finding container 5c8bddf74e03cc03721e912aa458f33a0231b71ed8166f9f057257e5015e477f: Status 404 returned error can't find the container with id 5c8bddf74e03cc03721e912aa458f33a0231b71ed8166f9f057257e5015e477f
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.492702 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" event={"ID":"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533","Type":"ContainerStarted","Data":"fdb42cb6daa4e93dd1ebd4524856070c6775adea89e74ffcbaf6faa2ea1f682d"}
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.492784 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" event={"ID":"1cd16b6d-22dc-4e5a-a206-6b8eab5a0533","Type":"ContainerStarted","Data":"5c8bddf74e03cc03721e912aa458f33a0231b71ed8166f9f057257e5015e477f"}
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.498914 5108 generic.go:358] "Generic (PLEG): container finished" podID="131f7f53-e6cd-4e60-87d5-5a67b6f40b76" containerID="b92f2e96de651da46e45924d3aa1ff4c8a9c2f7877090b4baa708056e8b41f50" exitCode=0
Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.498994 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"b92f2e96de651da46e45924d3aa1ff4c8a9c2f7877090b4baa708056e8b41f50"}
event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerDied","Data":"b92f2e96de651da46e45924d3aa1ff4c8a9c2f7877090b4baa708056e8b41f50"} Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.540740 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8jt7g" podStartSLOduration=79.540706712 podStartE2EDuration="1m19.540706712s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:43.512483194 +0000 UTC m=+102.787980164" watchObservedRunningTime="2026-02-02 00:11:43.540706712 +0000 UTC m=+102.816203662" Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.557428 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.557514 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.557564 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:43 crc kubenswrapper[5108]: I0202 00:11:43.557748 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:43 crc kubenswrapper[5108]: E0202 00:11:43.557739 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:43 crc kubenswrapper[5108]: E0202 00:11:43.557971 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:43 crc kubenswrapper[5108]: E0202 00:11:43.558130 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:43 crc kubenswrapper[5108]: E0202 00:11:43.558284 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:44 crc kubenswrapper[5108]: I0202 00:11:44.522791 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbldp" event={"ID":"131f7f53-e6cd-4e60-87d5-5a67b6f40b76","Type":"ContainerStarted","Data":"469e6bc3fd7bc3862cd77ae516c5cd503e5c6cf68a260b443b2b257ab6fcd60f"} Feb 02 00:11:44 crc kubenswrapper[5108]: I0202 00:11:44.554466 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gbldp" podStartSLOduration=80.554442486 podStartE2EDuration="1m20.554442486s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:44.552768782 +0000 UTC m=+103.828265752" watchObservedRunningTime="2026-02-02 00:11:44.554442486 +0000 UTC m=+103.829939496" Feb 02 00:11:45 crc kubenswrapper[5108]: I0202 00:11:45.556966 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:45 crc kubenswrapper[5108]: E0202 00:11:45.557907 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:45 crc kubenswrapper[5108]: I0202 00:11:45.557039 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:45 crc kubenswrapper[5108]: E0202 00:11:45.558185 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:45 crc kubenswrapper[5108]: I0202 00:11:45.557036 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:45 crc kubenswrapper[5108]: E0202 00:11:45.558381 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:45 crc kubenswrapper[5108]: I0202 00:11:45.557108 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:45 crc kubenswrapper[5108]: E0202 00:11:45.558752 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.534404 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f" exitCode=0 Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.534634 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.562347 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:47 crc kubenswrapper[5108]: E0202 00:11:47.562541 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.562546 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.562875 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:47 crc kubenswrapper[5108]: E0202 00:11:47.563211 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:47 crc kubenswrapper[5108]: E0202 00:11:47.563324 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:47 crc kubenswrapper[5108]: I0202 00:11:47.563401 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:47 crc kubenswrapper[5108]: E0202 00:11:47.563486 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.542062 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"af976d2979a45794a11c98dae39890ecd1007c20716cbc8d4471c47d5d6c31ee"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.542739 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.548886 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.549084 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.549207 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.549359 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.549498 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.549630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.557900 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:11:48 crc kubenswrapper[5108]: E0202 00:11:48.558419 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 02 00:11:48 crc kubenswrapper[5108]: I0202 00:11:48.563927 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podStartSLOduration=84.563898509 podStartE2EDuration="1m24.563898509s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:48.56359118 +0000 UTC m=+107.839088120" watchObservedRunningTime="2026-02-02 00:11:48.563898509 +0000 UTC m=+107.839395449" Feb 02 00:11:49 crc kubenswrapper[5108]: I0202 00:11:49.556621 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:49 crc kubenswrapper[5108]: I0202 00:11:49.556700 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:49 crc kubenswrapper[5108]: E0202 00:11:49.557803 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:49 crc kubenswrapper[5108]: I0202 00:11:49.556881 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:49 crc kubenswrapper[5108]: I0202 00:11:49.556728 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:49 crc kubenswrapper[5108]: E0202 00:11:49.558000 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:49 crc kubenswrapper[5108]: E0202 00:11:49.558239 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:49 crc kubenswrapper[5108]: E0202 00:11:49.558382 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:51 crc kubenswrapper[5108]: I0202 00:11:51.563963 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:51 crc kubenswrapper[5108]: I0202 00:11:51.564002 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:51 crc kubenswrapper[5108]: I0202 00:11:51.564069 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:51 crc kubenswrapper[5108]: E0202 00:11:51.564767 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:51 crc kubenswrapper[5108]: E0202 00:11:51.565035 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:51 crc kubenswrapper[5108]: E0202 00:11:51.565497 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:51 crc kubenswrapper[5108]: I0202 00:11:51.566473 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:51 crc kubenswrapper[5108]: E0202 00:11:51.566703 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:51 crc kubenswrapper[5108]: I0202 00:11:51.566765 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.254932 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.255159 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.255187 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.255247 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.255325 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.255304978 +0000 UTC m=+143.530801908 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.356357 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.356470 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.356586 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.356545729 +0000 UTC m=+143.632042679 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.356720 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.356849 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.356753 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.356888 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.356855038 +0000 UTC m=+143.632351978 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.357015 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.356982691 +0000 UTC m=+143.632479771 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.357084 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.357308 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.357338 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.357353 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.357420 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.357410702 +0000 UTC m=+143.632907632 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 00:11:52 crc kubenswrapper[5108]: I0202 00:11:52.458654 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.458861 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:52 crc kubenswrapper[5108]: E0202 00:11:52.458986 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs podName:f77c18f0-131e-482e-8e09-602b39b0c163 nodeName:}" failed. No retries permitted until 2026-02-02 00:12:24.458956011 +0000 UTC m=+143.734452971 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs") pod "network-metrics-daemon-26ppl" (UID: "f77c18f0-131e-482e-8e09-602b39b0c163") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.564387 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.564387 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:53 crc kubenswrapper[5108]: E0202 00:11:53.564978 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.564550 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.564529 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:53 crc kubenswrapper[5108]: E0202 00:11:53.565267 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:53 crc kubenswrapper[5108]: E0202 00:11:53.565406 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:53 crc kubenswrapper[5108]: E0202 00:11:53.565508 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.580974 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerStarted","Data":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.581804 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.581848 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.581867 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.620615 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.630155 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podStartSLOduration=89.630126294 podStartE2EDuration="1m29.630126294s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:11:53.629839127 +0000 UTC m=+112.905336127" watchObservedRunningTime="2026-02-02 00:11:53.630126294 +0000 UTC m=+112.905623254" Feb 02 00:11:53 crc kubenswrapper[5108]: I0202 00:11:53.633530 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:11:55 crc kubenswrapper[5108]: I0202 00:11:55.321470 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-26ppl"] Feb 02 00:11:55 crc kubenswrapper[5108]: I0202 00:11:55.323122 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:55 crc kubenswrapper[5108]: E0202 00:11:55.323330 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:55 crc kubenswrapper[5108]: I0202 00:11:55.561504 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:55 crc kubenswrapper[5108]: I0202 00:11:55.561529 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:55 crc kubenswrapper[5108]: I0202 00:11:55.561598 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:55 crc kubenswrapper[5108]: E0202 00:11:55.562059 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:55 crc kubenswrapper[5108]: E0202 00:11:55.562100 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:55 crc kubenswrapper[5108]: E0202 00:11:55.561863 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:57 crc kubenswrapper[5108]: I0202 00:11:57.557556 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:57 crc kubenswrapper[5108]: E0202 00:11:57.557751 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:57 crc kubenswrapper[5108]: I0202 00:11:57.557804 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:57 crc kubenswrapper[5108]: I0202 00:11:57.557977 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:57 crc kubenswrapper[5108]: E0202 00:11:57.558193 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:57 crc kubenswrapper[5108]: E0202 00:11:57.558259 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:57 crc kubenswrapper[5108]: I0202 00:11:57.558521 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:57 crc kubenswrapper[5108]: E0202 00:11:57.558668 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.557346 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.557461 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.557355 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.557346 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 02 00:11:59 crc kubenswrapper[5108]: E0202 00:11:59.557626 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26ppl" podUID="f77c18f0-131e-482e-8e09-602b39b0c163" Feb 02 00:11:59 crc kubenswrapper[5108]: E0202 00:11:59.557782 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 02 00:11:59 crc kubenswrapper[5108]: E0202 00:11:59.558021 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 02 00:11:59 crc kubenswrapper[5108]: E0202 00:11:59.558097 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.656719 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.657062 5108 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.731600 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.786979 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.787620 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.790427 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.790803 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.790982 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.792501 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.792693 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.793155 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.794137 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29499840-njc6g"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.807428 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.808284 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.812616 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.813082 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.813301 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.813464 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.814050 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.814322 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.819168 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wbv6f"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.819622 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.834106 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.834752 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.835219 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.843794 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-q88tw"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.854703 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.866343 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.866732 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.866975 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.867459 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.867593 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871644 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871696 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871738 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871761 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7ndv\" (UniqueName: \"kubernetes.io/projected/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-kube-api-access-x7ndv\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871781 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871809 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-machine-approver-tls\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.871856 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872061 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872105 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872130 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfk4d\" (UniqueName: \"kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872152 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572g4\" (UniqueName: \"kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872176 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-auth-proxy-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872373 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l8sn\" (UniqueName: \"kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872891 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872939 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.872962 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.873199 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.874542 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.879331 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-fn572"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.880047 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.884187 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.884959 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.885108 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.885335 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.885562 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.886758 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.887106 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.887280 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.887427 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.888051 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.895184 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.947970 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.948179 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.948418 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.953114 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.953291 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.953653 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.953736 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.953946 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954040 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954180 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954220 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954351 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954370 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954197 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954712 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954745 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.954888 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.955074 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973855 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973892 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973921 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-audit\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973942 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973965 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.973983 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbf5j\" (UniqueName: \"kubernetes.io/projected/8490096f-f230-4160-bb09-338c9fa9f7ca-kube-api-access-gbf5j\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974002 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tfk4d\" (UniqueName: \"kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974130 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-572g4\" (UniqueName: \"kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974215 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-auth-proxy-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974302 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974349 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2l8sn\" (UniqueName: \"kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974381 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974414 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-serving-cert\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974447 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974536 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974577 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-encryption-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974612 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-images\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974659 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 
00:11:59.974703 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-client\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974759 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974808 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49sqd\" (UniqueName: \"kubernetes.io/projected/688cb527-1d6f-4e22-9b14-4718201c8343-kube-api-access-49sqd\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974865 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.974907 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-audit-dir\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975060 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-config\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975105 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-auth-proxy-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975126 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975113 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975110 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-config\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975686 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975701 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975745 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975893 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.975988 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x7ndv\" (UniqueName: \"kubernetes.io/projected/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-kube-api-access-x7ndv\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976021 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976052 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-machine-approver-tls\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " 
pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976061 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976078 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976100 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/688cb527-1d6f-4e22-9b14-4718201c8343-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976138 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-image-import-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.976420 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.977081 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.977746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.990067 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt"] Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.994369 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.996912 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfk4d\" (UniqueName: \"kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:11:59 crc kubenswrapper[5108]: I0202 00:11:59.999061 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-machine-approver-tls\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.002756 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l8sn\" (UniqueName: \"kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn\") pod \"image-pruner-29499840-njc6g\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") " pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.004296 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert\") pod \"route-controller-manager-776cdc94d6-xtqwv\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.008486 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-572g4\" (UniqueName: \"kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4\") pod \"controller-manager-65b6cccf98-fc5pz\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.008988 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7ndv\" (UniqueName: \"kubernetes.io/projected/1f2e75fc-5a21-4f73-8f4c-050eb27c0601-kube-api-access-x7ndv\") pod \"machine-approver-54c688565-pw6lj\" (UID: \"1f2e75fc-5a21-4f73-8f4c-050eb27c0601\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.036342 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-9pw49"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.036514 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.038868 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.038930 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.039080 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.038874 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.039655 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.042499 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.042566 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.042660 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.051333 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.052047 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.052675 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053100 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053306 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053733 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053868 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053920 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.053834 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.054105 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.054468 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.057055 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.057364 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.060724 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.061651 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.061798 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.072884 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.073182 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077155 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077195 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/688cb527-1d6f-4e22-9b14-4718201c8343-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077249 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-image-import-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077333 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-encryption-config\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077356 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jv7\" (UniqueName: \"kubernetes.io/projected/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-kube-api-access-d6jv7\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077375 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d7088c96-1022-40ff-a06c-f6c299744e3a-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vbckt\" (UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077396 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7ffn\" (UniqueName: \"kubernetes.io/projected/d7088c96-1022-40ff-a06c-f6c299744e3a-kube-api-access-m7ffn\") pod \"cluster-samples-operator-6b564684c8-vbckt\" (UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077416 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-audit\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077439 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077458 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077477 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbf5j\" (UniqueName: \"kubernetes.io/projected/8490096f-f230-4160-bb09-338c9fa9f7ca-kube-api-access-gbf5j\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077496 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29780476-3e92-4559-af84-e97ab8689687-config\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077712 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-policies\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.077916 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-serving-cert\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078814 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078836 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-dir\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078857 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-serving-cert\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078880 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078914 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-client\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078933 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-encryption-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078954 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-images\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078977 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-client\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079003 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-serving-ca\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079023 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-49sqd\" (UniqueName: \"kubernetes.io/projected/688cb527-1d6f-4e22-9b14-4718201c8343-kube-api-access-49sqd\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079043 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29780476-3e92-4559-af84-e97ab8689687-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079065 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-audit-dir\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079085 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnxh\" (UniqueName: \"kubernetes.io/projected/29780476-3e92-4559-af84-e97ab8689687-kube-api-access-8rnxh\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.079107 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-config\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078522 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.080093 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-image-import-ca\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: 
\"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.078541 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-audit\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.080285 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.081100 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.081170 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8490096f-f230-4160-bb09-338c9fa9f7ca-audit-dir\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.081363 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-config\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.083471 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8490096f-f230-4160-bb09-338c9fa9f7ca-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.084924 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-serving-cert\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.087411 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-encryption-config\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.087615 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8490096f-f230-4160-bb09-338c9fa9f7ca-etcd-client\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc 
kubenswrapper[5108]: I0202 00:12:00.087900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/688cb527-1d6f-4e22-9b14-4718201c8343-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.090147 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/688cb527-1d6f-4e22-9b14-4718201c8343-images\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.097610 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbf5j\" (UniqueName: \"kubernetes.io/projected/8490096f-f230-4160-bb09-338c9fa9f7ca-kube-api-access-gbf5j\") pod \"apiserver-9ddfb9f55-wbv6f\" (UID: \"8490096f-f230-4160-bb09-338c9fa9f7ca\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.097869 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.097961 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-49sqd\" (UniqueName: \"kubernetes.io/projected/688cb527-1d6f-4e22-9b14-4718201c8343-kube-api-access-49sqd\") pod \"machine-api-operator-755bb95488-q88tw\" (UID: \"688cb527-1d6f-4e22-9b14-4718201c8343\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.098005 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.100564 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.100799 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.100905 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.101177 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.122883 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-4lq2m"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.123082 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.125730 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.127739 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.127786 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.128052 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.128087 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.128653 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-znc99"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.128799 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.129542 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.130002 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.131031 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.131048 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.131613 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.131640 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.131918 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-x5pzk"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.132150 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.132260 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.132269 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.140591 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.142986 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cvtnf"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.143914 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.145795 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.150185 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.150806 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.162735 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.162889 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.171284 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.176066 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29499840-njc6g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180267 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180540 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180717 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnxh\" (UniqueName: \"kubernetes.io/projected/29780476-3e92-4559-af84-e97ab8689687-kube-api-access-8rnxh\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180762 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d2203371-fbdd-4110-9b33-39f278fbaa0d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180796 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180827 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180861 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180892 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2203371-fbdd-4110-9b33-39f278fbaa0d-serving-cert\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180926 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d485c3-4de5-4d03-adf4-56f546c56674-serving-cert\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180957 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-trusted-ca\") pod 
\"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.180987 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2203371-fbdd-4110-9b33-39f278fbaa0d-config\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181025 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181057 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dace4fd5-2d12-4c11-8252-9ac7426f870b-serving-cert\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181198 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-config\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181375 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-encryption-config\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181418 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jv7\" (UniqueName: \"kubernetes.io/projected/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-kube-api-access-d6jv7\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181446 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181478 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d7088c96-1022-40ff-a06c-f6c299744e3a-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vbckt\" 
(UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181784 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7ffn\" (UniqueName: \"kubernetes.io/projected/d7088c96-1022-40ff-a06c-f6c299744e3a-kube-api-access-m7ffn\") pod \"cluster-samples-operator-6b564684c8-vbckt\" (UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.181955 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-oauth-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182047 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182443 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-trusted-ca-bundle\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182498 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ql89\" (UniqueName: \"kubernetes.io/projected/79d485c3-4de5-4d03-adf4-56f546c56674-kube-api-access-7ql89\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182651 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29780476-3e92-4559-af84-e97ab8689687-config\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182722 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182776 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r4p9\" (UniqueName: \"kubernetes.io/projected/dace4fd5-2d12-4c11-8252-9ac7426f870b-kube-api-access-4r4p9\") pod \"console-operator-67c89758df-znc99\" (UID: 
\"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182825 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-policies\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182864 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-serving-cert\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182905 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2203371-fbdd-4110-9b33-39f278fbaa0d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182929 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182953 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-trusted-ca\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.182981 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183024 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-dir\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183069 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183101 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183148 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-dir\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183169 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw5ss\" (UniqueName: \"kubernetes.io/projected/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-kube-api-access-fw5ss\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183330 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29780476-3e92-4559-af84-e97ab8689687-config\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183344 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-client\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183412 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183522 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183551 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-audit-policies\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183576 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-service-ca\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.183615 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-oauth-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.184999 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gz45\" (UniqueName: \"kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185041 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185067 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185087 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-config\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185108 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185171 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-serving-ca\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185192 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scqbk\" (UniqueName: 
\"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-kube-api-access-scqbk\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185211 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185250 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185269 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185297 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185344 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29780476-3e92-4559-af84-e97ab8689687-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185362 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185385 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-tmp\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.185407 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.186108 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-serving-ca\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.188604 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-encryption-config\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.189206 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d7088c96-1022-40ff-a06c-f6c299744e3a-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vbckt\" (UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.190441 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29780476-3e92-4559-af84-e97ab8689687-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.190474 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.192160 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-serving-cert\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.192639 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-etcd-client\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.195892 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.210157 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.212813 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:00 crc kubenswrapper[5108]: W0202 00:12:00.213299 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2e75fc_5a21_4f73_8f4c_050eb27c0601.slice/crio-58d1c4eb8712d64eccd81d5392605e13a13a3e2931e93bcc65d91e388b08dea1 WatchSource:0}: Error finding container 58d1c4eb8712d64eccd81d5392605e13a13a3e2931e93bcc65d91e388b08dea1: Status 404 returned error can't find the container with id 58d1c4eb8712d64eccd81d5392605e13a13a3e2931e93bcc65d91e388b08dea1 Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.224023 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.224316 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.241681 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.250785 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.271630 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.276017 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.276161 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.276335 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286367 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-config\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286409 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdfp9\" (UniqueName: \"kubernetes.io/projected/2b96d2a0-be27-428e-8bfd-f78a09feb756-kube-api-access-rdfp9\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286443 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286461 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-oauth-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286653 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-trusted-ca-bundle\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286691 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7ql89\" (UniqueName: \"kubernetes.io/projected/79d485c3-4de5-4d03-adf4-56f546c56674-kube-api-access-7ql89\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286727 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286757 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4r4p9\" (UniqueName: \"kubernetes.io/projected/dace4fd5-2d12-4c11-8252-9ac7426f870b-kube-api-access-4r4p9\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 
crc kubenswrapper[5108]: I0202 00:12:00.286782 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/74feb297-18d1-4e3a-b077-779e202c89da-tmp-dir\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286811 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2203371-fbdd-4110-9b33-39f278fbaa0d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286830 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-trusted-ca\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286849 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286882 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286900 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286938 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fw5ss\" (UniqueName: \"kubernetes.io/projected/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-kube-api-access-fw5ss\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286964 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.286991 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287011 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-service-ca\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287026 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-oauth-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287048 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gz45\" (UniqueName: \"kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287071 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7cr\" (UniqueName: \"kubernetes.io/projected/64332d15-ee3f-4864-9165-3217a06b24c2-kube-api-access-hr7cr\") pod \"migrator-866fcbc849-m7wqk\" (UID: \"64332d15-ee3f-4864-9165-3217a06b24c2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287093 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287110 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287114 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-config\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.287820 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-service-ca\") pod \"console-64d44f6ddf-9pw49\" (UID: 
\"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288039 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-trusted-ca-bundle\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288748 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-config\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288784 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288817 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59650315-e011-493f-bbf9-c20555ea6025-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288863 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-scqbk\" (UniqueName: \"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-kube-api-access-scqbk\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288881 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288898 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288919 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288939 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74feb297-18d1-4e3a-b077-779e202c89da-metrics-tls\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288954 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/59650315-e011-493f-bbf9-c20555ea6025-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288977 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.288992 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2b96d2a0-be27-428e-8bfd-f78a09feb756-available-featuregates\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289015 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-tmp\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289057 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289083 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d2203371-fbdd-4110-9b33-39f278fbaa0d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289098 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289116 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289134 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289157 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2203371-fbdd-4110-9b33-39f278fbaa0d-serving-cert\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289174 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b96d2a0-be27-428e-8bfd-f78a09feb756-serving-cert\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289196 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59650315-e011-493f-bbf9-c20555ea6025-config\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289222 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d485c3-4de5-4d03-adf4-56f546c56674-serving-cert\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289254 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59650315-e011-493f-bbf9-c20555ea6025-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289275 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289293 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2203371-fbdd-4110-9b33-39f278fbaa0d-config\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289315 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289332 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85k8\" (UniqueName: \"kubernetes.io/projected/74feb297-18d1-4e3a-b077-779e202c89da-kube-api-access-q85k8\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289373 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dace4fd5-2d12-4c11-8252-9ac7426f870b-serving-cert\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.289920 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.290087 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.290499 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 
00:12:00.290916 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d2203371-fbdd-4110-9b33-39f278fbaa0d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291088 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291271 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291408 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-oauth-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291635 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d485c3-4de5-4d03-adf4-56f546c56674-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291806 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2203371-fbdd-4110-9b33-39f278fbaa0d-config\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291852 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.291961 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-tmp\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.292073 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.292506 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.292607 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.293494 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.294175 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.294202 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d485c3-4de5-4d03-adf4-56f546c56674-serving-cert\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.294329 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.295766 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-serving-cert\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.297337 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.297842 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.298128 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.299495 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.300747 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.300942 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.301244 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2203371-fbdd-4110-9b33-39f278fbaa0d-serving-cert\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.301539 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.301553 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.302867 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.303545 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-console-oauth-config\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.330955 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.343168 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-cp5z2"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.343610 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.352243 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.371099 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.384149 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.384359 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.384870 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dace4fd5-2d12-4c11-8252-9ac7426f870b-serving-cert\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390457 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74feb297-18d1-4e3a-b077-779e202c89da-metrics-tls\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390491 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/59650315-e011-493f-bbf9-c20555ea6025-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390514 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2b96d2a0-be27-428e-8bfd-f78a09feb756-available-featuregates\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390557 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nghp5\" (UniqueName: \"kubernetes.io/projected/e1b2e108-2c25-4942-b6bb-9bd186134bc9-kube-api-access-nghp5\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390584 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b96d2a0-be27-428e-8bfd-f78a09feb756-serving-cert\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390603 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59650315-e011-493f-bbf9-c20555ea6025-config\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390625 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59650315-e011-493f-bbf9-c20555ea6025-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390647 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q85k8\" (UniqueName: \"kubernetes.io/projected/74feb297-18d1-4e3a-b077-779e202c89da-kube-api-access-q85k8\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390691 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rdfp9\" (UniqueName: \"kubernetes.io/projected/2b96d2a0-be27-428e-8bfd-f78a09feb756-kube-api-access-rdfp9\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390732 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/74feb297-18d1-4e3a-b077-779e202c89da-tmp-dir\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390776 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hr7cr\" (UniqueName: \"kubernetes.io/projected/64332d15-ee3f-4864-9165-3217a06b24c2-kube-api-access-hr7cr\") pod \"migrator-866fcbc849-m7wqk\" (UID: \"64332d15-ee3f-4864-9165-3217a06b24c2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390798 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1b2e108-2c25-4942-b6bb-9bd186134bc9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390830 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59650315-e011-493f-bbf9-c20555ea6025-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.390847 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1b2e108-2c25-4942-b6bb-9bd186134bc9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.391387 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/59650315-e011-493f-bbf9-c20555ea6025-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.391640 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2b96d2a0-be27-428e-8bfd-f78a09feb756-available-featuregates\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.392115 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/74feb297-18d1-4e3a-b077-779e202c89da-tmp-dir\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.399291 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.404689 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.411976 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.413096 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-trusted-ca\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.431320 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.440782 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dace4fd5-2d12-4c11-8252-9ac7426f870b-config\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.449935 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.470880 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: W0202 00:12:00.476939 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8490096f_f230_4160_bb09_338c9fa9f7ca.slice/crio-3dff47fd5622d76f9094ff593a6f9990ca9a7fc81f935d62943a1d2bd6f8491f WatchSource:0}: Error finding container 3dff47fd5622d76f9094ff593a6f9990ca9a7fc81f935d62943a1d2bd6f8491f: Status 404 returned error can't find the container with id 3dff47fd5622d76f9094ff593a6f9990ca9a7fc81f935d62943a1d2bd6f8491f Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.490855 5108 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.492138 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1b2e108-2c25-4942-b6bb-9bd186134bc9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.492191 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1b2e108-2c25-4942-b6bb-9bd186134bc9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.492264 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nghp5\" (UniqueName: \"kubernetes.io/projected/e1b2e108-2c25-4942-b6bb-9bd186134bc9-kube-api-access-nghp5\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.495005 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1b2e108-2c25-4942-b6bb-9bd186134bc9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.501645 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74feb297-18d1-4e3a-b077-779e202c89da-metrics-tls\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.514736 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.530378 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.550678 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.570557 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.571661 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.571699 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.571855 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.576943 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b96d2a0-be27-428e-8bfd-f78a09feb756-serving-cert\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.582943 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.590589 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.610026 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.619378 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59650315-e011-493f-bbf9-c20555ea6025-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.625738 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" event={"ID":"8490096f-f230-4160-bb09-338c9fa9f7ca","Type":"ContainerStarted","Data":"3dff47fd5622d76f9094ff593a6f9990ca9a7fc81f935d62943a1d2bd6f8491f"} Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.625783 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.626026 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.631049 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.666703 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnxh\" (UniqueName: \"kubernetes.io/projected/29780476-3e92-4559-af84-e97ab8689687-kube-api-access-8rnxh\") pod \"openshift-apiserver-operator-846cbfc458-zhjc8\" (UID: \"29780476-3e92-4559-af84-e97ab8689687\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.669777 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.672931 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59650315-e011-493f-bbf9-c20555ea6025-config\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.691281 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.727411 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jv7\" (UniqueName: \"kubernetes.io/projected/8eb5f446-9d16-4ceb-9bb7-9424862cac0b-kube-api-access-d6jv7\") pod \"apiserver-8596bd845d-fn572\" (UID: \"8eb5f446-9d16-4ceb-9bb7-9424862cac0b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.744497 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7ffn\" (UniqueName: \"kubernetes.io/projected/d7088c96-1022-40ff-a06c-f6c299744e3a-kube-api-access-m7ffn\") pod \"cluster-samples-operator-6b564684c8-vbckt\" (UID: \"d7088c96-1022-40ff-a06c-f6c299744e3a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.750180 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.771052 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.790477 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.811499 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.830513 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.850034 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.867695 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.892354 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" event={"ID":"1f2e75fc-5a21-4f73-8f4c-050eb27c0601","Type":"ContainerStarted","Data":"58d1c4eb8712d64eccd81d5392605e13a13a3e2931e93bcc65d91e388b08dea1"} Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.892407 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29499840-njc6g" event={"ID":"dcbaa597-5b18-4219-b757-5f10e86a2c1c","Type":"ContainerStarted","Data":"ab1dda4ca19e44a7d7547556112d79c7a9164fc1db4386291660d7d4020c24e9"} Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.892425 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" event={"ID":"ebaf16ae-d4df-42da-a1b5-03495d1ef713","Type":"ContainerStarted","Data":"3158eaa8cced5445a37b12560efe834d0b215f5c202cf0145f728d9c8aaa5068"} Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.892443 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"] Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.893491 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.906857 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.939116 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ql89\" (UniqueName: \"kubernetes.io/projected/79d485c3-4de5-4d03-adf4-56f546c56674-kube-api-access-7ql89\") pod \"authentication-operator-7f5c659b84-mr9b9\" (UID: \"79d485c3-4de5-4d03-adf4-56f546c56674\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.953289 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw5ss\" (UniqueName: \"kubernetes.io/projected/6d992c02-f6cc-4488-9108-a72c6c2f3dcf-kube-api-access-fw5ss\") pod \"console-64d44f6ddf-9pw49\" (UID: \"6d992c02-f6cc-4488-9108-a72c6c2f3dcf\") " pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.957089 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2203371-fbdd-4110-9b33-39f278fbaa0d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-klk4g\" (UID: \"d2203371-fbdd-4110-9b33-39f278fbaa0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.960889 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.976679 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.985179 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:00 crc kubenswrapper[5108]: I0202 00:12:00.993679 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r4p9\" (UniqueName: \"kubernetes.io/projected/dace4fd5-2d12-4c11-8252-9ac7426f870b-kube-api-access-4r4p9\") pod \"console-operator-67c89758df-znc99\" (UID: \"dace4fd5-2d12-4c11-8252-9ac7426f870b\") " pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.002198 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-scqbk\" (UniqueName: \"kubernetes.io/projected/a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26-kube-api-access-scqbk\") pod \"cluster-image-registry-operator-86c45576b9-g8d7h\" (UID: \"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.007985 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gz45\" (UniqueName: \"kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45\") pod \"oauth-openshift-66458b6674-4lq2m\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.009701 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.014936 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.032978 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.050294 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.054096 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1b2e108-2c25-4942-b6bb-9bd186134bc9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.057392 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.065030 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.070986 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.077442 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"] Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.078192 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.090239 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.113747 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.152071 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.192182 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" event={"ID":"c6bb9533-ef42-4cf1-92de-3a011b1934b8","Type":"ContainerStarted","Data":"683d5e48d4bbd76223bfa55ebb9faedf8bd6693391a55afaa0790e34cd786995"} Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.192516 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.192747 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" event={"ID":"688cb527-1d6f-4e22-9b14-4718201c8343","Type":"ContainerStarted","Data":"1e9e5b2cca3ab853d62ce694bb95e422521c70191082faebdc45c803fbfe5db5"}
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.195046 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-4zf25"]
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.200334 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q85k8\" (UniqueName: \"kubernetes.io/projected/74feb297-18d1-4e3a-b077-779e202c89da-kube-api-access-q85k8\") pod \"dns-operator-799b87ffcd-x5pzk\" (UID: \"74feb297-18d1-4e3a-b077-779e202c89da\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.216260 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdfp9\" (UniqueName: \"kubernetes.io/projected/2b96d2a0-be27-428e-8bfd-f78a09feb756-kube-api-access-rdfp9\") pod \"openshift-config-operator-5777786469-cvtnf\" (UID: \"2b96d2a0-be27-428e-8bfd-f78a09feb756\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.245569 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59650315-e011-493f-bbf9-c20555ea6025-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2k5pl\" (UID: \"59650315-e011-493f-bbf9-c20555ea6025\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.256895 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr7cr\" (UniqueName: \"kubernetes.io/projected/64332d15-ee3f-4864-9165-3217a06b24c2-kube-api-access-hr7cr\") pod \"migrator-866fcbc849-m7wqk\" (UID: \"64332d15-ee3f-4864-9165-3217a06b24c2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.264782 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"]
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.270340 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.287661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nghp5\" (UniqueName: \"kubernetes.io/projected/e1b2e108-2c25-4942-b6bb-9bd186134bc9-kube-api-access-nghp5\") pod \"machine-config-controller-f9cdd68f7-7v2ch\" (UID: \"e1b2e108-2c25-4942-b6bb-9bd186134bc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.293682 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.293698 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.314571 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.330812 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.334477 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"]
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.335593 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.335772 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.351153 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.369627 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.373421 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.379002 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.391259 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.397178 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl"
Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.406168 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03927a55_b629_4f9c_be0f_3499aba5b90e.slice/crio-ab4178c0f93978aa03540a620121f5f5624450b66655822381ed4a7581fad072 WatchSource:0}: Error finding container ab4178c0f93978aa03540a620121f5f5624450b66655822381ed4a7581fad072: Status 404 returned error can't find the container with id ab4178c0f93978aa03540a620121f5f5624450b66655822381ed4a7581fad072
Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.407140 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79d485c3_4de5_4d03_adf4_56f546c56674.slice/crio-8a32a9cf40ec32feb5189d85552d666773264beabb1d0306431885517df2ea20 WatchSource:0}: Error finding container 8a32a9cf40ec32feb5189d85552d666773264beabb1d0306431885517df2ea20: Status 404 returned error can't find the container with id 8a32a9cf40ec32feb5189d85552d666773264beabb1d0306431885517df2ea20
Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.409776 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2203371_fbdd_4110_9b33_39f278fbaa0d.slice/crio-92f32b0fea3f83c877881cb678270e63baadc1131f9dd75326383f6a1362b01d WatchSource:0}: Error finding container 92f32b0fea3f83c877881cb678270e63baadc1131f9dd75326383f6a1362b01d: Status 404 returned error can't find the container with id 92f32b0fea3f83c877881cb678270e63baadc1131f9dd75326383f6a1362b01d
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.410987 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.431482 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.450035 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.464153 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddace4fd5_2d12_4c11_8252_9ac7426f870b.slice/crio-1d3f143097cfef2a1c6969b8cbb8abd202a99ba479f6984b71259a6306ade522 WatchSource:0}: Error finding container 1d3f143097cfef2a1c6969b8cbb8abd202a99ba479f6984b71259a6306ade522: Status 404 returned error can't find the container with id 1d3f143097cfef2a1c6969b8cbb8abd202a99ba479f6984b71259a6306ade522
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.471105 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.492866 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.524815 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.533012 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wb8mw"]
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.533614 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.544554 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.552331 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.576849 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.597691 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.611392 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.629111 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"]
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.629200 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"]
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.631875 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.636309 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.637332 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.637590 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.653673 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.692122 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.716663 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.730900 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.750923 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.758692 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b96d2a0_be27_428e_8bfd_f78a09feb756.slice/crio-c619e269574a614e62448d9cf83c047a7af481334875a4db06f4bbca0e0f66c9 WatchSource:0}: Error finding container c619e269574a614e62448d9cf83c047a7af481334875a4db06f4bbca0e0f66c9: Status 404 returned error can't find the container with id c619e269574a614e62448d9cf83c047a7af481334875a4db06f4bbca0e0f66c9
Feb 02 00:12:01 crc kubenswrapper[5108]: W0202 00:12:01.783697 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74feb297_18d1_4e3a_b077_779e202c89da.slice/crio-e96f8487c83ebffa4028aeab0a1061c0237488349f54c375ff6e0f49b7bf4245 WatchSource:0}: Error finding container e96f8487c83ebffa4028aeab0a1061c0237488349f54c375ff6e0f49b7bf4245: Status 404 returned error can't find the container with id e96f8487c83ebffa4028aeab0a1061c0237488349f54c375ff6e0f49b7bf4245
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.794809 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.811430 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.831900 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swg6f\" (UniqueName: \"kubernetes.io/projected/07d89198-8b8e-4edc-96b8-05b6df5194f6-kube-api-access-swg6f\") pod \"downloads-747b44746d-cp5z2\" (UID: \"07d89198-8b8e-4edc-96b8-05b6df5194f6\") " pod="openshift-console/downloads-747b44746d-cp5z2"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832012 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-images\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832047 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832067 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832135 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832161 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832202 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832436 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832581 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832613 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832654 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832690 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbvn\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832865 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqz7x\" (UniqueName: \"kubernetes.io/projected/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-kube-api-access-jqz7x\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.832939 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: E0202 00:12:01.834551 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.33452943 +0000 UTC m=+121.610026360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.851273 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.871410 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.891113 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.910036 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" event={"ID":"ebaf16ae-d4df-42da-a1b5-03495d1ef713","Type":"ContainerStarted","Data":"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686"}
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.910106 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm"]
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.910318 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.913826 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.913921 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.916341 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.931623 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.933909 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934148 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934185 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbvn\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934208 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde8d9df-2e55-498d-acbe-7b5396cac5a7-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934245 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde8d9df-2e55-498d-acbe-7b5396cac5a7-config\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934275 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/525b7b06-ae33-4a3b-bf12-139bff69a17c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934296 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/99916b4a-423b-4db6-a912-cc2ef585eab3-webhook-certs\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw"
Feb 02 00:12:01 crc kubenswrapper[5108]: E0202 00:12:01.934329 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.434294931 +0000 UTC m=+121.709791881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934384 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nz59\" (UniqueName: \"kubernetes.io/projected/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-kube-api-access-4nz59\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934430 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jqz7x\" (UniqueName: \"kubernetes.io/projected/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-kube-api-access-jqz7x\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934486 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-swg6f\" (UniqueName: \"kubernetes.io/projected/07d89198-8b8e-4edc-96b8-05b6df5194f6-kube-api-access-swg6f\") pod \"downloads-747b44746d-cp5z2\" (UID: \"07d89198-8b8e-4edc-96b8-05b6df5194f6\") " pod="openshift-console/downloads-747b44746d-cp5z2"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934526 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b79d203-f1c7-4523-9d97-51181cdb26d2-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934575 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c22e3c9-f940-436c-bcd4-0ae77d143061-config\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934614 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c7nm\" (UniqueName: \"kubernetes.io/projected/4c22e3c9-f940-436c-bcd4-0ae77d143061-kube-api-access-5c7nm\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934655 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934696 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934730 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-metrics-certs\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934756 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-images\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934782 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f7kc\" (UniqueName: \"kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934814 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934844 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934873 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934897 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934925 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934956 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.934984 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935016 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c22e3c9-f940-436c-bcd4-0ae77d143061-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935092 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b79d203-f1c7-4523-9d97-51181cdb26d2-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935136 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/45594040-ee30-4578-aa8c-a9e8ef858c06-tmp-dir\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935166 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9tm\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-kube-api-access-hm9tm\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935195 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjmvq\" (UniqueName: \"kubernetes.io/projected/fde8d9df-2e55-498d-acbe-7b5396cac5a7-kube-api-access-qjmvq\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935244 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935270 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-stats-auth\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935294 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c22e3c9-f940-436c-bcd4-0ae77d143061-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935322 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935350 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525b7b06-ae33-4a3b-bf12-139bff69a17c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935391 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-client\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935426 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5zr6\" (UniqueName: \"kubernetes.io/projected/99916b4a-423b-4db6-a912-cc2ef585eab3-kube-api-access-z5zr6\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935458 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnj69\" (UniqueName: \"kubernetes.io/projected/45594040-ee30-4578-aa8c-a9e8ef858c06-kube-api-access-lnj69\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935481 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-default-certificate\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935568 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-serving-cert\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935611 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/525b7b06-ae33-4a3b-bf12-139bff69a17c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935645 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwdjm\" (UniqueName: \"kubernetes.io/projected/031f8213-ba02-4add-9d14-c3a995a10fa9-kube-api-access-bwdjm\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935685 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935710 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-config\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935745 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/031f8213-ba02-4add-9d14-c3a995a10fa9-service-ca-bundle\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935784 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-service-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935820 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935846 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcnnp\" (UniqueName: \"kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935911 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935940 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.935964 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/525b7b06-ae33-4a3b-bf12-139bff69a17c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.936015 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.936916 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.937918 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.938220 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.950110 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.969676 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Feb 02 00:12:01 crc kubenswrapper[5108]: I0202 00:12:01.990130 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.010723 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.030796 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037550 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-serving-cert\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037618 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/525b7b06-ae33-4a3b-bf12-139bff69a17c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037660 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwdjm\" (UniqueName: \"kubernetes.io/projected/031f8213-ba02-4add-9d14-c3a995a10fa9-kube-api-access-bwdjm\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037685 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-config\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037704 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/031f8213-ba02-4add-9d14-c3a995a10fa9-service-ca-bundle\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037739 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-service-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037757 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.037781 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xcnnp\" (UniqueName: \"kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038045 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038103 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/525b7b06-ae33-4a3b-bf12-139bff69a17c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038180 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-profile-collector-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038289 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038352 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038399 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-srv-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038524 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde8d9df-2e55-498d-acbe-7b5396cac5a7-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038584 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/525b7b06-ae33-4a3b-bf12-139bff69a17c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038620 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde8d9df-2e55-498d-acbe-7b5396cac5a7-config\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.038696 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.038923 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.538900782 +0000 UTC m=+121.814397942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039446 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/525b7b06-ae33-4a3b-bf12-139bff69a17c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039505 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/99916b4a-423b-4db6-a912-cc2ef585eab3-webhook-certs\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039556 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4nz59\" (UniqueName: \"kubernetes.io/projected/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-kube-api-access-4nz59\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039621 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b79d203-f1c7-4523-9d97-51181cdb26d2-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039680 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c22e3c9-f940-436c-bcd4-0ae77d143061-config\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039722 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5c7nm\" (UniqueName: \"kubernetes.io/projected/4c22e3c9-f940-436c-bcd4-0ae77d143061-kube-api-access-5c7nm\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039824 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039893 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-metrics-certs\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.039938 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9f7kc\" (UniqueName: \"kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040011 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040104 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040180 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040250 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c22e3c9-f940-436c-bcd4-0ae77d143061-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040294 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97af9c02-0ff8-4146-9313-f3ecc17e1faa-tmpfs\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040360 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/525b7b06-ae33-4a3b-bf12-139bff69a17c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.040470 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c22e3c9-f940-436c-bcd4-0ae77d143061-config\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041003 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde8d9df-2e55-498d-acbe-7b5396cac5a7-config\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041206 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hdgn\" (UniqueName: \"kubernetes.io/projected/97af9c02-0ff8-4146-9313-f3ecc17e1faa-kube-api-access-8hdgn\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041361 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b79d203-f1c7-4523-9d97-51181cdb26d2-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041430 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/45594040-ee30-4578-aa8c-a9e8ef858c06-tmp-dir\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041466 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hm9tm\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-kube-api-access-hm9tm\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041550 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c22e3c9-f940-436c-bcd4-0ae77d143061-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041556 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qjmvq\" (UniqueName: \"kubernetes.io/projected/fde8d9df-2e55-498d-acbe-7b5396cac5a7-kube-api-access-qjmvq\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041643 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-stats-auth\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041748 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c22e3c9-f940-436c-bcd4-0ae77d143061-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041802 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041879 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525b7b06-ae33-4a3b-bf12-139bff69a17c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.041962 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-client\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.042029 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z5zr6\" (UniqueName: \"kubernetes.io/projected/99916b4a-423b-4db6-a912-cc2ef585eab3-kube-api-access-z5zr6\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.042088 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lnj69\" (UniqueName: \"kubernetes.io/projected/45594040-ee30-4578-aa8c-a9e8ef858c06-kube-api-access-lnj69\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.042131 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-default-certificate\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.042257 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName:
\"kubernetes.io/empty-dir/45594040-ee30-4578-aa8c-a9e8ef858c06-tmp-dir\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.042714 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b79d203-f1c7-4523-9d97-51181cdb26d2-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.045799 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b79d203-f1c7-4523-9d97-51181cdb26d2-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.046480 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde8d9df-2e55-498d-acbe-7b5396cac5a7-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.046566 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-default-certificate\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.047119 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-stats-auth\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.047915 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.050846 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.050985 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c22e3c9-f940-436c-bcd4-0ae77d143061-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.052379 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.054373 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/525b7b06-ae33-4a3b-bf12-139bff69a17c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.058815 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/031f8213-ba02-4add-9d14-c3a995a10fa9-service-ca-bundle\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.070977 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.090318 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.106664 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/031f8213-ba02-4add-9d14-c3a995a10fa9-metrics-certs\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.120729 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.131777 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.132552 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.134027 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.136938 5108 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-xtqwv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.137092 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.146301 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.146581 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-profile-collector-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.146652 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-srv-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.146730 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.646691756 +0000 UTC m=+121.922188686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.146998 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97af9c02-0ff8-4146-9313-f3ecc17e1faa-tmpfs\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.147069 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8hdgn\" (UniqueName: \"kubernetes.io/projected/97af9c02-0ff8-4146-9313-f3ecc17e1faa-kube-api-access-8hdgn\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.148083 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97af9c02-0ff8-4146-9313-f3ecc17e1faa-tmpfs\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.150407 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.171185 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.184459 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-profile-collector-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.184884 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.190749 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.211885 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.214157 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc"] Feb 02 00:12:02 
crc kubenswrapper[5108]: I0202 00:12:02.214595 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.230634 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.233443 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.248928 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.250971 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.750952987 +0000 UTC m=+122.026449917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.254146 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.256187 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.271014 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.286993 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/99916b4a-423b-4db6-a912-cc2ef585eab3-webhook-certs\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.290088 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.309410 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-fc5pz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.309475 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.313146 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.320772 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.330200 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.332798 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.332844 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-4zcv5"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.350939 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.351077 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.851043977 +0000 UTC m=+122.126540907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.351668 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.351895 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-tmpfs\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.352053 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-webhook-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.352126 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkv7s\" (UniqueName: \"kubernetes.io/projected/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-kube-api-access-pkv7s\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.352314 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.352357 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-apiservice-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.353069 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.85303615 +0000 UTC m=+122.128533150 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.365095 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wbv6f"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.365190 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" event={"ID":"c6bb9533-ef42-4cf1-92de-3a011b1934b8","Type":"ContainerStarted","Data":"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.365480 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" event={"ID":"8eb5f446-9d16-4ceb-9bb7-9424862cac0b","Type":"ContainerStarted","Data":"622cac008e6f344601da7814328d32bf4251e371ecb3f167f409d3931a5c0323"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.365536 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.365557 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.366799 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.376706 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.411823 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.436558 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.446348 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97af9c02-0ff8-4146-9313-f3ecc17e1faa-srv-cert\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.450579 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.453741 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454071 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-tmpfs\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454101 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22jjj\" (UniqueName: \"kubernetes.io/projected/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-kube-api-access-22jjj\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454182 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-apiservice-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454241 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-tmpfs\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454321 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454349 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkfm\" (UniqueName: \"kubernetes.io/projected/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-kube-api-access-pdkfm\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.454892 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:02.954868317 +0000 UTC m=+122.230365247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.456410 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-tmpfs\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.454474 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-key\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.457526 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-webhook-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.457623 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-srv-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.457659 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-cabundle\") pod 
\"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.457705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pkv7s\" (UniqueName: \"kubernetes.io/projected/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-kube-api-access-pkv7s\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.470154 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.513597 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqbvn\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.526844 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqz7x\" (UniqueName: \"kubernetes.io/projected/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-kube-api-access-jqz7x\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:02 crc kubenswrapper[5108]: W0202 00:12:02.533068 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1b2e108_2c25_4942_b6bb_9bd186134bc9.slice/crio-b1eac1bace5f497c22016bfd4a514ab71202c44100f5732edf43602fd0921f57 WatchSource:0}: Error finding container b1eac1bace5f497c22016bfd4a514ab71202c44100f5732edf43602fd0921f57: Status 404 returned error can't find the container with id b1eac1bace5f497c22016bfd4a514ab71202c44100f5732edf43602fd0921f57 Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.547734 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" event={"ID":"688cb527-1d6f-4e22-9b14-4718201c8343","Type":"ContainerStarted","Data":"e012f07d508f60af46efab18b336a6bf44e36c3b7a37cecd5f8ff132f8f02b90"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.547797 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.547983 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.553745 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-swg6f\" (UniqueName: \"kubernetes.io/projected/07d89198-8b8e-4edc-96b8-05b6df5194f6-kube-api-access-swg6f\") pod \"downloads-747b44746d-cp5z2\" (UID: \"07d89198-8b8e-4edc-96b8-05b6df5194f6\") " pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.558848 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.558990 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pdkfm\" (UniqueName: \"kubernetes.io/projected/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-kube-api-access-pdkfm\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559142 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-key\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559325 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-srv-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559422 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-cabundle\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559551 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-tmpfs\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559641 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-22jjj\" (UniqueName: \"kubernetes.io/projected/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-kube-api-access-22jjj\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.559724 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.560097 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.060080453 +0000 UTC m=+122.335577383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.560988 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-tmpfs\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.562942 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.565886 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.571995 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.580954 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.590547 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.596160 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-images\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601043 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-znc99" event={"ID":"dace4fd5-2d12-4c11-8252-9ac7426f870b","Type":"ContainerStarted","Data":"1d3f143097cfef2a1c6969b8cbb8abd202a99ba479f6984b71259a6306ade522"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601101 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-9pw49"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601121 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601164 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" event={"ID":"29780476-3e92-4559-af84-e97ab8689687","Type":"ContainerStarted","Data":"edf2f5ae7b656f989a8d79219fb5cd964cf185d1dcb11ba1176c4c4a69ef2c39"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601178 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601193 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-q88tw"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.601206 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hnl48"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.602665 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.610241 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.628746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.630962 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.644623 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27d783b3-6f7d-4f4d-b054-225bfcb98fd5-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tkjzb\" (UID: \"27d783b3-6f7d-4f4d-b054-225bfcb98fd5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.653857 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.664872 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.665120 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.165095013 +0000 UTC m=+122.440591943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.665554 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.666423 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gscz\" (UniqueName: \"kubernetes.io/projected/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-kube-api-access-7gscz\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.666640 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.667042 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-serving-cert\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.667167 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.167148908 +0000 UTC m=+122.442645838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.669636 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.678721 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-config\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.684921 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" event={"ID":"1f2e75fc-5a21-4f73-8f4c-050eb27c0601","Type":"ContainerStarted","Data":"5d866278182645a4b04b27cd412a4f630b1f2a02a19cbdf9183778c0f02dc03b"}
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.684965 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29499840-njc6g" event={"ID":"dcbaa597-5b18-4219-b757-5f10e86a2c1c","Type":"ContainerStarted","Data":"662689ee61fccec648a90a4375a519042cf1cb9c27ef807a261aa5cd1d207f99"}
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.684998 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" event={"ID":"79d485c3-4de5-4d03-adf4-56f546c56674","Type":"ContainerStarted","Data":"8a32a9cf40ec32feb5189d85552d666773264beabb1d0306431885517df2ea20"}
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.685018 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-824d7"]
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.685030 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hnl48"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.711970 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcnnp\" (UniqueName: \"kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp\") pod \"collect-profiles-29499840-qxdlz\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.730834 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.731623 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwdjm\" (UniqueName: \"kubernetes.io/projected/031f8213-ba02-4add-9d14-c3a995a10fa9-kube-api-access-bwdjm\") pod \"router-default-68cf44c8b8-4zf25\" (UID: \"031f8213-ba02-4add-9d14-c3a995a10fa9\") " pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.739427 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-service-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.766798 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.767211 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.767353 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.267328571 +0000 UTC m=+122.542825501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.767944 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7gscz\" (UniqueName: \"kubernetes.io/projected/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-kube-api-access-7gscz\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768004 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mg25\" (UniqueName: \"kubernetes.io/projected/917a1c8b-59d5-4acb-8cef-91979326a7d1-kube-api-access-2mg25\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768027 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-plugins-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768087 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-socket-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768162 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-csi-data-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768252 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88c0487-caa2-44ee-a139-33b289b9fc2d-serving-cert\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768478 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768696 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88c0487-caa2-44ee-a139-33b289b9fc2d-config\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd"
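[Editor's note] Every MountDevice/TearDownAt failure above reduces to the same condition: the kubelet has no entry named kubevirt.io.hostpath-provisioner in its list of registered CSI drivers yet, and the pod that would register it, hostpath-provisioner/csi-hostpathplugin-hnl48, is itself still being started ("No sandbox for pod can be found" above). The sketch below is an illustrative reduction of that lookup-and-fail pattern only; the type and function names are hypothetical and it is not kubelet's actual code.

```go
// Illustrative sketch: a driver-registry lookup that fails the way the
// "driver name ... not found in the list of registered CSI drivers"
// messages above do. All names here are hypothetical.
package main

import (
	"fmt"
	"sync"
)

// csiDriverRegistry mimics the kubelet-side list of registered CSI plugins.
type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> unix socket path
}

// Register is what a driver's registration sidecar effectively triggers
// once the driver pod is up and its registration socket is discovered.
func (r *csiDriverRegistry) Register(name, socket string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = socket
}

// NewClient fails the same way attacher.MountDevice / Unmounter.TearDownAt
// report failure while the driver has not registered yet.
func (r *csiDriverRegistry) NewClient(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	sock, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return sock, nil
}

func main() {
	reg := &csiDriverRegistry{drivers: map[string]string{}}
	if _, err := reg.NewClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("before registration:", err) // the state logged above
	}
	reg.Register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	sock, _ := reg.NewClient("kubevirt.io.hostpath-provisioner")
	fmt.Println("after registration:", sock) // mounts can start succeeding
}
```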
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88c0487-caa2-44ee-a139-33b289b9fc2d-config\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768731 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-registration-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.768863 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.26882702 +0000 UTC m=+122.544323950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.768906 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.769049 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-mountpoint-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.769128 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdcw\" (UniqueName: \"kubernetes.io/projected/e88c0487-caa2-44ee-a139-33b289b9fc2d-kube-api-access-vsdcw\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.772297 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.772784 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.787009 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nz59\" (UniqueName: \"kubernetes.io/projected/00c9b96f-70c1-47b2-ab2f-570c9911ecaf-kube-api-access-4nz59\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qmhlw\" (UID: \"00c9b96f-70c1-47b2-ab2f-570c9911ecaf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.788620 5108 generic.go:358] "Generic (PLEG): container finished" podID="8490096f-f230-4160-bb09-338c9fa9f7ca" containerID="806cbf335f4c9122a98af00277e8275b9c9c56fd35ff77e9c13a5c09fad858b6" exitCode=0 Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.796010 5108 generic.go:358] "Generic (PLEG): container finished" podID="8eb5f446-9d16-4ceb-9bb7-9424862cac0b" containerID="4c6e7884627b6708f6b36fa0a5fd9c8c47024a9108bb856e7749da000b38a18d" exitCode=0 Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804592 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" event={"ID":"03927a55-b629-4f9c-be0f-3499aba5b90e","Type":"ContainerStarted","Data":"ab4178c0f93978aa03540a620121f5f5624450b66655822381ed4a7581fad072"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804659 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" event={"ID":"d2203371-fbdd-4110-9b33-39f278fbaa0d","Type":"ContainerStarted","Data":"92f32b0fea3f83c877881cb678270e63baadc1131f9dd75326383f6a1362b01d"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804680 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-znc99"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804711 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" event={"ID":"d7088c96-1022-40ff-a06c-f6c299744e3a","Type":"ContainerStarted","Data":"2ad20847710da3126f76cc87d6b9148544302a9e5e4ae90647a3e99524987c69"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804728 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804747 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804760 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cvtnf"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804775 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-9pw49" event={"ID":"6d992c02-f6cc-4488-9108-a72c6c2f3dcf","Type":"ContainerStarted","Data":"667462f5842f9336d060c680487d82e541368124a4626d982b5aaa54ddf6a9f0"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804792 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" event={"ID":"79d485c3-4de5-4d03-adf4-56f546c56674","Type":"ContainerStarted","Data":"9cd7b43085215338e0f3618f1075735e2c21684fc535b52c243eb7c5d342543a"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804809 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-cp5z2"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804829 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804844 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" event={"ID":"64332d15-ee3f-4864-9165-3217a06b24c2","Type":"ContainerStarted","Data":"8f2bc8a0b6e698f037e9383e20c6e4ee4f255ad3fc27bbd9bf4b9c0f9172e8f9"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804867 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-4lq2m"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804883 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" event={"ID":"d2203371-fbdd-4110-9b33-39f278fbaa0d","Type":"ContainerStarted","Data":"46d3e656b986b28a6c3ed6dd7019d7791902fca89c304d6aeaad28f1500fe047"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804896 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29499840-njc6g"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804911 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804930 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" event={"ID":"e1b2e108-2c25-4942-b6bb-9bd186134bc9","Type":"ContainerStarted","Data":"b1eac1bace5f497c22016bfd4a514ab71202c44100f5732edf43602fd0921f57"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804948 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804966 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804981 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" event={"ID":"59650315-e011-493f-bbf9-c20555ea6025","Type":"ContainerStarted","Data":"3ee763e0c64f20bee57b387b2a75d1c42b8796be4c633ce066b463b9e2251fcc"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.804995 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805010 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" 
event={"ID":"8490096f-f230-4160-bb09-338c9fa9f7ca","Type":"ContainerDied","Data":"806cbf335f4c9122a98af00277e8275b9c9c56fd35ff77e9c13a5c09fad858b6"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805033 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" event={"ID":"d7088c96-1022-40ff-a06c-f6c299744e3a","Type":"ContainerStarted","Data":"930c0fc731362b74d47c1d69f55db286e0ea2297614d996d109a47a45e26cbeb"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805049 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805065 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" event={"ID":"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26","Type":"ContainerStarted","Data":"085eaeea2f3b71f73a742a92beea7fc7c5c168d52b65f8e21625d1f7a0060537"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805088 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-9pw49" event={"ID":"6d992c02-f6cc-4488-9108-a72c6c2f3dcf","Type":"ContainerStarted","Data":"961554ebe1f6274cf27a8fe1773f7ae08ab641c306f9331db0a1ce83fcb584c2"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805101 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" event={"ID":"8eb5f446-9d16-4ceb-9bb7-9424862cac0b","Type":"ContainerDied","Data":"4c6e7884627b6708f6b36fa0a5fd9c8c47024a9108bb856e7749da000b38a18d"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805119 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-znc99" event={"ID":"dace4fd5-2d12-4c11-8252-9ac7426f870b","Type":"ContainerStarted","Data":"9a1c9821ac905b46c5b43b356e28319733ccb1106d884e83b7c61377841bc40b"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805134 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805150 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" event={"ID":"29780476-3e92-4559-af84-e97ab8689687","Type":"ContainerStarted","Data":"813601af7cf995b6fb2d0282609c818f051a53b8b25c7f974a7794a72d578fb2"} Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805163 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-x5pzk"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805179 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805193 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-q9bzk"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.805638 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.808845 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c7nm\" (UniqueName: \"kubernetes.io/projected/4c22e3c9-f940-436c-bcd4-0ae77d143061-kube-api-access-5c7nm\") pod \"openshift-controller-manager-operator-686468bdd5-7hvdm\" (UID: \"4c22e3c9-f940-436c-bcd4-0ae77d143061\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.830181 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.831746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f7kc\" (UniqueName: \"kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc\") pod \"marketplace-operator-547dbd544d-fmvtw\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.832774 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-ca\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.869599 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.870094 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.870376 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.370341218 +0000 UTC m=+122.645838318 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.870377 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm9tm\" (UniqueName: \"kubernetes.io/projected/9b79d203-f1c7-4523-9d97-51181cdb26d2-kube-api-access-hm9tm\") pod \"ingress-operator-6b9cb4dbcf-9l4wv\" (UID: \"9b79d203-f1c7-4523-9d97-51181cdb26d2\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.870922 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-socket-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.870963 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-csi-data-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871008 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88c0487-caa2-44ee-a139-33b289b9fc2d-serving-cert\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871101 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871219 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88c0487-caa2-44ee-a139-33b289b9fc2d-config\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871260 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-registration-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871356 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-mountpoint-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871388 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdcw\" (UniqueName: \"kubernetes.io/projected/e88c0487-caa2-44ee-a139-33b289b9fc2d-kube-api-access-vsdcw\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871424 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2mg25\" (UniqueName: \"kubernetes.io/projected/917a1c8b-59d5-4acb-8cef-91979326a7d1-kube-api-access-2mg25\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871447 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-plugins-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871887 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-plugins-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.871977 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-socket-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.872050 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-csi-data-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.872525 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.372516866 +0000 UTC m=+122.648013796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.872629 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-registration-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.872674 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/917a1c8b-59d5-4acb-8cef-91979326a7d1-mountpoint-dir\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.886499 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjmvq\" (UniqueName: \"kubernetes.io/projected/fde8d9df-2e55-498d-acbe-7b5396cac5a7-kube-api-access-qjmvq\") pod \"kube-storage-version-migrator-operator-565b79b866-llk9m\" (UID: \"fde8d9df-2e55-498d-acbe-7b5396cac5a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.893781 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.906766 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525b7b06-ae33-4a3b-bf12-139bff69a17c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-r7j49\" (UID: \"525b7b06-ae33-4a3b-bf12-139bff69a17c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.917100 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.921890 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-96tjr"] Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.922933 5108 util.go:30] "No sandbox for pod can be found. 
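[Editor's note] Each failed mount/unmount above is parked by nestedpendingoperations with a "No retries permitted until ..." deadline; the reconciler keeps re-queuing the volume, but the operation is skipped until the deadline passes. 500ms is the initial step here, and the delay is understood to grow exponentially on repeated failures. Below is a minimal sketch of such a per-operation backoff gate, assuming exponential doubling with a cap; the names are illustrative, not the kubelet implementation.

```go
// Minimal sketch of a per-operation backoff gate with "no retries
// permitted until" semantics, assuming exponential doubling with a cap.
// Illustrative names only.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay    time.Duration // current durationBeforeRetry
	notUntil time.Time     // no retries permitted until this instant
}

type operationGate struct {
	initial, max time.Duration
	ops          map[string]*backoff // key: volumeName+podName
}

// Try reports whether the operation may run now.
func (g *operationGate) Try(key string, now time.Time) error {
	if b, ok := g.ops[key]; ok && now.Before(b.notUntil) {
		return fmt.Errorf("operation for %q failed. No retries permitted until %s (durationBeforeRetry %s)",
			key, b.notUntil.Format(time.RFC3339Nano), b.delay)
	}
	return nil
}

// Failed records a failure and pushes the retry deadline out.
func (g *operationGate) Failed(key string, now time.Time) {
	b, ok := g.ops[key]
	if !ok {
		b = &backoff{delay: g.initial}
		g.ops[key] = b
	} else if b.delay < g.max {
		b.delay *= 2 // exponential growth on repeated failures
	}
	b.notUntil = now.Add(b.delay)
}

func main() {
	g := &operationGate{initial: 500 * time.Millisecond, max: 2 * time.Minute, ops: map[string]*backoff{}}
	key := "kubevirt.io.hostpath-provisioner^pvc-b21f41aa"
	now := time.Now()
	g.Failed(key, now)                            // first failure: retry in 500ms
	fmt.Println(g.Try(key, now))                  // still gated
	fmt.Println(g.Try(key, now.Add(time.Second))) // <nil>: deadline has passed
}
```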
Need to start a new one" pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.938720 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5zr6\" (UniqueName: \"kubernetes.io/projected/99916b4a-423b-4db6-a912-cc2ef585eab3-kube-api-access-z5zr6\") pod \"multus-admission-controller-69db94689b-wb8mw\" (UID: \"99916b4a-423b-4db6-a912-cc2ef585eab3\") " pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.956848 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.967843 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45594040-ee30-4578-aa8c-a9e8ef858c06-etcd-client\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.973533 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.974056 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbnpq\" (UniqueName: \"kubernetes.io/projected/ec9d7fc9-2385-408d-87f0-f2efafa41865-kube-api-access-vbnpq\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.974143 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-certs\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.974273 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-node-bootstrap-token\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:02 crc kubenswrapper[5108]: E0202 00:12:02.975109 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.475081302 +0000 UTC m=+122.750578222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:02 crc kubenswrapper[5108]: I0202 00:12:02.991651 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:02.997523 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hdgn\" (UniqueName: \"kubernetes.io/projected/97af9c02-0ff8-4146-9313-f3ecc17e1faa-kube-api-access-8hdgn\") pod \"olm-operator-5cdf44d969-mztxr\" (UID: \"97af9c02-0ff8-4146-9313-f3ecc17e1faa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:02.998818 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-apiservice-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.002445 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-webhook-cert\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.002681 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.011511 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.018729 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.034233 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.034385 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-srv-cert\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.044788 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.044970 5108 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-xtqwv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.045032 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.045123 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-fc5pz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.045244 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.055807 5108 patch_prober.go:28] interesting pod/console-operator-67c89758df-znc99 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.055884 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-znc99" podUID="dace4fd5-2d12-4c11-8252-9ac7426f870b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.056715 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.062525 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.063526 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-key\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.070362 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.071358 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-signing-cabundle\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075796 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f864fdce-3b6b-4ba2-9159-12c2d21f2601-metrics-tls\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075862 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbnpq\" (UniqueName: \"kubernetes.io/projected/ec9d7fc9-2385-408d-87f0-f2efafa41865-kube-api-access-vbnpq\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075888 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-certs\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075911 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f864fdce-3b6b-4ba2-9159-12c2d21f2601-tmp-dir\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075950 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-node-bootstrap-token\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.075994 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f864fdce-3b6b-4ba2-9159-12c2d21f2601-config-volume\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.076022 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.076093 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24xv2\" (UniqueName: \"kubernetes.io/projected/f864fdce-3b6b-4ba2-9159-12c2d21f2601-kube-api-access-24xv2\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.076615 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.57660146 +0000 UTC m=+122.852098390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.090624 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.111913 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: W0202 00:12:03.118948 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8285a46b_171e_4c8c_ba54_5ab062df76fc.slice/crio-791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d WatchSource:0}: Error finding container 791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d: Status 404 returned error can't find the container with id 791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.152246 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.165628 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.169853 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.172556 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkv7s\" (UniqueName: \"kubernetes.io/projected/f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc-kube-api-access-pkv7s\") pod \"packageserver-7d4fc7d867-h2slm\" (UID: \"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.178236 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.178507 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f864fdce-3b6b-4ba2-9159-12c2d21f2601-metrics-tls\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.178683 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f864fdce-3b6b-4ba2-9159-12c2d21f2601-tmp-dir\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.178875 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f864fdce-3b6b-4ba2-9159-12c2d21f2601-config-volume\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.179080 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-24xv2\" (UniqueName: \"kubernetes.io/projected/f864fdce-3b6b-4ba2-9159-12c2d21f2601-kube-api-access-24xv2\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.180627 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.680599134 +0000 UTC m=+122.956096064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.183531 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f864fdce-3b6b-4ba2-9159-12c2d21f2601-tmp-dir\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.184655 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.207451 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdkfm\" (UniqueName: \"kubernetes.io/projected/6c411323-7b32-4e2b-a2b9-c6b63abeb1ea-kube-api-access-pdkfm\") pod \"catalog-operator-75ff9f647d-z28zc\" (UID: \"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.230887 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.231309 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-22jjj\" (UniqueName: \"kubernetes.io/projected/51f1951c-4ea1-4d6b-a965-5faf55ee8ed2-kube-api-access-22jjj\") pod \"service-ca-74545575db-4zcv5\" (UID: \"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2\") " pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.235547 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88c0487-caa2-44ee-a139-33b289b9fc2d-serving-cert\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.250816 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.256221 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.271538 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.280812 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.281591 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.781565428 +0000 UTC m=+123.057062358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.283243 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88c0487-caa2-44ee-a139-33b289b9fc2d-config\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284769 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284819 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" event={"ID":"1f2e75fc-5a21-4f73-8f4c-050eb27c0601","Type":"ContainerStarted","Data":"8b65ab51da705077a5b8b44a4f073f7d26a5c0631e765f8986cab314207c4b66"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284855 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284870 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-znc99" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284887 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284899 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" event={"ID":"2b96d2a0-be27-428e-8bfd-f78a09feb756","Type":"ContainerStarted","Data":"c619e269574a614e62448d9cf83c047a7af481334875a4db06f4bbca0e0f66c9"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284911 5108 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" event={"ID":"74feb297-18d1-4e3a-b077-779e202c89da","Type":"ContainerStarted","Data":"e96f8487c83ebffa4028aeab0a1061c0237488349f54c375ff6e0f49b7bf4245"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284923 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284936 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-4zcv5"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284947 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284958 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.284970 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285472 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285645 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285693 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285707 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wb8mw"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285735 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285755 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285770 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285786 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hnl48"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285843 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-fn572"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285860 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.285902 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.287551 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ng2x6"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 
00:12:03.294534 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.312479 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.314770 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-4zcv5" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.344469 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.351542 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.381621 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.387586 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.387988 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.887944385 +0000 UTC m=+123.163441315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.388250 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66ac186f-bc25-4f39-9d7b-394d9683b5c4-cert\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.388310 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phzpm\" (UniqueName: \"kubernetes.io/projected/66ac186f-bc25-4f39-9d7b-394d9683b5c4-kube-api-access-phzpm\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.388536 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.389022 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.889015274 +0000 UTC m=+123.164512204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.393696 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.408812 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.442029 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.442344 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gscz\" (UniqueName: \"kubernetes.io/projected/2c1108f2-209c-4d4c-affc-fe8fbfd27cca-kube-api-access-7gscz\") pod \"package-server-manager-77f986bd66-f55br\" (UID: \"2c1108f2-209c-4d4c-affc-fe8fbfd27cca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.448694 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.448837 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.450254 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.460556 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-certs\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.475408 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.475427 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.481029 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ec9d7fc9-2385-408d-87f0-f2efafa41865-node-bootstrap-token\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.493116 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.493876 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66ac186f-bc25-4f39-9d7b-394d9683b5c4-cert\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.493913 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phzpm\" (UniqueName: \"kubernetes.io/projected/66ac186f-bc25-4f39-9d7b-394d9683b5c4-kube-api-access-phzpm\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.494058 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:03.994024784 +0000 UTC m=+123.269521704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.506494 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.530305 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mg25\" (UniqueName: \"kubernetes.io/projected/917a1c8b-59d5-4acb-8cef-91979326a7d1-kube-api-access-2mg25\") pod \"csi-hostpathplugin-hnl48\" (UID: \"917a1c8b-59d5-4acb-8cef-91979326a7d1\") " pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:03 crc kubenswrapper[5108]: W0202 00:12:03.544481 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod525b7b06_ae33_4a3b_bf12_139bff69a17c.slice/crio-6a48d414bcfe9515708c203fe3df2d2dd06d62582c8454774bed04da6a3d575e WatchSource:0}: Error finding container 6a48d414bcfe9515708c203fe3df2d2dd06d62582c8454774bed04da6a3d575e: Status 404 returned error can't find the container with id 6a48d414bcfe9515708c203fe3df2d2dd06d62582c8454774bed04da6a3d575e Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.550695 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsdcw\" (UniqueName: \"kubernetes.io/projected/e88c0487-caa2-44ee-a139-33b289b9fc2d-kube-api-access-vsdcw\") pod \"service-ca-operator-5b9c976747-ft7zd\" (UID: \"e88c0487-caa2-44ee-a139-33b289b9fc2d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.553935 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.575527 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.592042 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f864fdce-3b6b-4ba2-9159-12c2d21f2601-metrics-tls\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.597836 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.602295 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.602626 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.10261328 +0000 UTC m=+123.378110210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.611572 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.627404 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.628533 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651293 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-96tjr"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651754 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q9bzk"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651810 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29499840-njc6g"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651823 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651834 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651844 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wbv6f"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651853 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-q88tw"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651862 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-fn572"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651871 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651880 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651890 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-9pw49"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651899 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-4lq2m"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651910 5108 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651919 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651928 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-znc99"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.651593 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f864fdce-3b6b-4ba2-9159-12c2d21f2601-config-volume\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.658861 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbnpq\" (UniqueName: \"kubernetes.io/projected/ec9d7fc9-2385-408d-87f0-f2efafa41865-kube-api-access-vbnpq\") pod \"machine-config-server-824d7\" (UID: \"ec9d7fc9-2385-408d-87f0-f2efafa41865\") " pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.662870 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.662983 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cvtnf"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663020 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663034 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-x5pzk"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663047 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663061 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663100 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663122 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663135 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663148 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663179 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663195 5108 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663208 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.663219 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wb8mw"] Feb 02 00:12:03 crc kubenswrapper[5108]: W0202 00:12:03.689255 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99916b4a_423b_4db6_a912_cc2ef585eab3.slice/crio-29f19b6a3da71cd59a6a3c1958574f4a99b12428aafecd321c1e41ec850119a9 WatchSource:0}: Error finding container 29f19b6a3da71cd59a6a3c1958574f4a99b12428aafecd321c1e41ec850119a9: Status 404 returned error can't find the container with id 29f19b6a3da71cd59a6a3c1958574f4a99b12428aafecd321c1e41ec850119a9 Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.695709 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.697483 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-24xv2\" (UniqueName: \"kubernetes.io/projected/f864fdce-3b6b-4ba2-9159-12c2d21f2601-kube-api-access-24xv2\") pod \"dns-default-q9bzk\" (UID: \"f864fdce-3b6b-4ba2-9159-12c2d21f2601\") " pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.698325 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.709694 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.709906 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.209866699 +0000 UTC m=+123.485363629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.710044 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.710608 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.711057 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.211049241 +0000 UTC m=+123.486546171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.728660 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-824d7" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.741752 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66ac186f-bc25-4f39-9d7b-394d9683b5c4-cert\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.756783 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.757057 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.790341 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phzpm\" (UniqueName: \"kubernetes.io/projected/66ac186f-bc25-4f39-9d7b-394d9683b5c4-kube-api-access-phzpm\") pod \"ingress-canary-96tjr\" (UID: \"66ac186f-bc25-4f39-9d7b-394d9683b5c4\") " pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.791370 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.802193 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnj69\" (UniqueName: \"kubernetes.io/projected/45594040-ee30-4578-aa8c-a9e8ef858c06-kube-api-access-lnj69\") pod \"etcd-operator-69b85846b6-6jnxl\" (UID: \"45594040-ee30-4578-aa8c-a9e8ef858c06\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.811758 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.812439 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.812700 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.812725 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.812773 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready\") 
pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.812802 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xl46\" (UniqueName: \"kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.812956 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.312935389 +0000 UTC m=+123.588432319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.818891 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.832799 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" event={"ID":"00c9b96f-70c1-47b2-ab2f-570c9911ecaf","Type":"ContainerStarted","Data":"b0bd1b187bbbb754f27cfef12a7d5f1cbe1ee9daf4aa8ec0180b3caefdcfff4b"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.834054 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" event={"ID":"031f8213-ba02-4add-9d14-c3a995a10fa9","Type":"ContainerStarted","Data":"ad774d57500bb9e0fc53f27ff35acb3a77561017af7111c7a796200ffd8f6057"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.840692 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" event={"ID":"03927a55-b629-4f9c-be0f-3499aba5b90e","Type":"ContainerStarted","Data":"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.842241 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" event={"ID":"59650315-e011-493f-bbf9-c20555ea6025","Type":"ContainerStarted","Data":"dd8c1237f4b0cfcc2014cd3f28fdafcb2c7160092996df3277435c1949c25268"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.843872 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" event={"ID":"97af9c02-0ff8-4146-9313-f3ecc17e1faa","Type":"ContainerStarted","Data":"62c614918ea1ed767fd2378cc41eb8537204d35f7925c249c531c0a38e787b9c"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.845269 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" 
event={"ID":"8285a46b-171e-4c8c-ba54-5ab062df76fc","Type":"ContainerStarted","Data":"791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.845660 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.847926 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" event={"ID":"a0606fb6-5a43-45d0-9bf0-d9afd6ff3b26","Type":"ContainerStarted","Data":"b1ce024c5139d6ed5da0f595f77dab589e4936242aebf05079321a106b535522"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.850944 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" event={"ID":"99916b4a-423b-4db6-a912-cc2ef585eab3","Type":"ContainerStarted","Data":"29f19b6a3da71cd59a6a3c1958574f4a99b12428aafecd321c1e41ec850119a9"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.853254 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" event={"ID":"4c22e3c9-f940-436c-bcd4-0ae77d143061","Type":"ContainerStarted","Data":"da6479b86cb53a1cf69d2886a6f1e2e95b22fffbe0a1c7f6a8a87775b99f4e8f"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.854273 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" event={"ID":"9b79d203-f1c7-4523-9d97-51181cdb26d2","Type":"ContainerStarted","Data":"65f94510318c4561e69d3f97ae53f9b1e6bbb466ebed5d4c3b077af1ba4d4a03"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.856134 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" event={"ID":"525b7b06-ae33-4a3b-bf12-139bff69a17c","Type":"ContainerStarted","Data":"6a48d414bcfe9515708c203fe3df2d2dd06d62582c8454774bed04da6a3d575e"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.857222 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" event={"ID":"fde8d9df-2e55-498d-acbe-7b5396cac5a7","Type":"ContainerStarted","Data":"f12daf5a7b7ac4781a26b5f15ef59738c0f6b8cdc640c762e6bd96095474a7a0"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.859529 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerStarted","Data":"b13ed7e02312952627a8fe290f3f42545cea89e59d6401fe8e6ee3b38f6bedcd"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.864414 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" event={"ID":"688cb527-1d6f-4e22-9b14-4718201c8343","Type":"ContainerStarted","Data":"97a2863c8e5866afb11a484a683b6301f14173c4c8442a743c64cb4d5adb897a"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.867133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" event={"ID":"2b96d2a0-be27-428e-8bfd-f78a09feb756","Type":"ContainerStarted","Data":"27aadd57983610ac0f185271929402ea50f3644923e9bd626982607ee695c627"} Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.869799 5108 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.907025 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-96tjr" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.915534 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.915575 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.915606 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.915655 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.915688 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2xl46\" (UniqueName: \"kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: E0202 00:12:03.916841 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.41682531 +0000 UTC m=+123.692322240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.917125 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.917361 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.917748 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.932151 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.937799 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.953280 5108 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-xtqwv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.953359 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.953726 5108 patch_prober.go:28] interesting pod/console-operator-67c89758df-znc99 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.953776 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-znc99" podUID="dace4fd5-2d12-4c11-8252-9ac7426f870b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.954223 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.956412 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm"] Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.969810 5108 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-4lq2m container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.969873 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Feb 02 00:12:03 crc kubenswrapper[5108]: I0202 00:12:03.973847 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xl46\" (UniqueName: \"kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46\") pod \"cni-sysctl-allowlist-ds-ng2x6\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.023573 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc"] Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.024857 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.025492 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.525466647 +0000 UTC m=+123.800963577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.026631 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.029068 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.529052222 +0000 UTC m=+123.804549152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.099512 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-cp5z2"] Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.129338 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.129594 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.629551063 +0000 UTC m=+123.905047993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.129860 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.131165 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.630306283 +0000 UTC m=+123.905803213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.149602 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd"] Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.182170 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q9bzk"] Feb 02 00:12:04 crc kubenswrapper[5108]: W0202 00:12:04.222917 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf864fdce_3b6b_4ba2_9159_12c2d21f2601.slice/crio-7b6f5012e8545a6b7e326c4421cb54a0b6bb10953eb043b7928fc48371d20573 WatchSource:0}: Error finding container 7b6f5012e8545a6b7e326c4421cb54a0b6bb10953eb043b7928fc48371d20573: Status 404 returned error can't find the container with id 7b6f5012e8545a6b7e326c4421cb54a0b6bb10953eb043b7928fc48371d20573 Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.230589 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.231120 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:04.731100992 +0000 UTC m=+124.006597922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.231691 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-4zcv5"] Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.246671 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.253871 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br"] Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.257394 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hnl48"] Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.258900 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-fc5pz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.258956 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 00:12:04 crc kubenswrapper[5108]: W0202 00:12:04.295222 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod917a1c8b_59d5_4acb_8cef_91979326a7d1.slice/crio-b79cbc0218e66151d7be64102ab45349368b39bd4198715de5bc685403d11b11 WatchSource:0}: Error finding container b79cbc0218e66151d7be64102ab45349368b39bd4198715de5bc685403d11b11: Status 404 returned error can't find the container with id b79cbc0218e66151d7be64102ab45349368b39bd4198715de5bc685403d11b11 Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.315573 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-96tjr"] Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.328348 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl"] Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.333762 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.334194 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 00:12:04.834175711 +0000 UTC m=+124.109672641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.352995 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-znc99" podStartSLOduration=100.352965289 podStartE2EDuration="1m40.352965289s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.350054102 +0000 UTC m=+123.625551052" watchObservedRunningTime="2026-02-02 00:12:04.352965289 +0000 UTC m=+123.628462219" Feb 02 00:12:04 crc kubenswrapper[5108]: W0202 00:12:04.355007 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45594040_ee30_4578_aa8c_a9e8ef858c06.slice/crio-8ebdc02e0d431e12bc244bb0960fe851c5d91116385d0f23d9ad0a69c4cbfb2e WatchSource:0}: Error finding container 8ebdc02e0d431e12bc244bb0960fe851c5d91116385d0f23d9ad0a69c4cbfb2e: Status 404 returned error can't find the container with id 8ebdc02e0d431e12bc244bb0960fe851c5d91116385d0f23d9ad0a69c4cbfb2e Feb 02 00:12:04 crc kubenswrapper[5108]: W0202 00:12:04.373066 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66ac186f_bc25_4f39_9d7b_394d9683b5c4.slice/crio-d338299e0a43d133d38e00f771194afb0bbe5cbc1ea6345a676cc0e14d25ce81 WatchSource:0}: Error finding container d338299e0a43d133d38e00f771194afb0bbe5cbc1ea6345a676cc0e14d25ce81: Status 404 returned error can't find the container with id d338299e0a43d133d38e00f771194afb0bbe5cbc1ea6345a676cc0e14d25ce81 Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.383925 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pw6lj" podStartSLOduration=100.383902929 podStartE2EDuration="1m40.383902929s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.382494251 +0000 UTC m=+123.657991191" watchObservedRunningTime="2026-02-02 00:12:04.383902929 +0000 UTC m=+123.659399859" Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.436173 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.438031 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-02 00:12:04.93798765 +0000 UTC m=+124.213484700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.470160 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-9pw49" podStartSLOduration=100.469992018 podStartE2EDuration="1m40.469992018s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.424957255 +0000 UTC m=+123.700454195" watchObservedRunningTime="2026-02-02 00:12:04.469992018 +0000 UTC m=+123.745488948" Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.470829 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-klk4g" podStartSLOduration=100.47082236 podStartE2EDuration="1m40.47082236s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.46668086 +0000 UTC m=+123.742177810" watchObservedRunningTime="2026-02-02 00:12:04.47082236 +0000 UTC m=+123.746319310" Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.538600 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.539123 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.039101568 +0000 UTC m=+124.314598498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.602267 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-q88tw" podStartSLOduration=100.60221666 podStartE2EDuration="1m40.60221666s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.600182755 +0000 UTC m=+123.875679705" watchObservedRunningTime="2026-02-02 00:12:04.60221666 +0000 UTC m=+123.877713590" Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.642422 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.643513 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.143490812 +0000 UTC m=+124.418987732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.669453 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" podStartSLOduration=100.669424579 podStartE2EDuration="1m40.669424579s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.669367428 +0000 UTC m=+123.944864368" watchObservedRunningTime="2026-02-02 00:12:04.669424579 +0000 UTC m=+123.944921509" Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.671614 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mr9b9" podStartSLOduration=100.671603996 podStartE2EDuration="1m40.671603996s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:04.643809491 +0000 UTC m=+123.919306421" watchObservedRunningTime="2026-02-02 00:12:04.671603996 +0000 UTC m=+123.947100926" Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.745280 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.746095 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.246075519 +0000 UTC m=+124.521572449 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.847853 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.848284 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.348259315 +0000 UTC m=+124.623756255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.952280 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:04 crc kubenswrapper[5108]: E0202 00:12:04.952900 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.452885125 +0000 UTC m=+124.728382055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.967437 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" event={"ID":"031f8213-ba02-4add-9d14-c3a995a10fa9","Type":"ContainerStarted","Data":"741a5e5e4e18e911dfdf2b5e5840f16e6e43ba4ef72fb2c29fc2eb7ff1366738"} Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.975704 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" event={"ID":"917a1c8b-59d5-4acb-8cef-91979326a7d1","Type":"ContainerStarted","Data":"b79cbc0218e66151d7be64102ab45349368b39bd4198715de5bc685403d11b11"} Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.986508 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-824d7" event={"ID":"ec9d7fc9-2385-408d-87f0-f2efafa41865","Type":"ContainerStarted","Data":"78c58d4a935e815abdd2e20984f52c1eaa78f43fae88002c7ebb39a86e404bae"} Feb 02 00:12:04 crc kubenswrapper[5108]: I0202 00:12:04.999122 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" event={"ID":"e1b2e108-2c25-4942-b6bb-9bd186134bc9","Type":"ContainerStarted","Data":"1450de438627cfa7f452b819a62d30550c58c5bf1ace61b9ff8d1a16c6e3b0fd"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.000003 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" event={"ID":"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b","Type":"ContainerStarted","Data":"b7ccd63409a2599caa2a1d6a430c1e67af5f138dd3ea1e54d57df99b1d6cd73a"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.045789 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" event={"ID":"8eb5f446-9d16-4ceb-9bb7-9424862cac0b","Type":"ContainerStarted","Data":"62d317867d108f124247eb8b10471272b2750ecc456cabdbefea82582a812a80"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.050268 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" event={"ID":"45594040-ee30-4578-aa8c-a9e8ef858c06","Type":"ContainerStarted","Data":"8ebdc02e0d431e12bc244bb0960fe851c5d91116385d0f23d9ad0a69c4cbfb2e"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.055125 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.056001 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-02 00:12:05.555967335 +0000 UTC m=+124.831464265 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.078802 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" event={"ID":"74feb297-18d1-4e3a-b077-779e202c89da","Type":"ContainerStarted","Data":"0d746d8307495c32c04d459e3e2b91eee5fe17d31030d4b3c91e36e38c6c3719"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.093456 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-cp5z2" event={"ID":"07d89198-8b8e-4edc-96b8-05b6df5194f6","Type":"ContainerStarted","Data":"7a39e1408001c53856587460b4d183f2cf618151452c8a7a0807f54727156f95"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.135967 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" event={"ID":"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc","Type":"ContainerStarted","Data":"1aa404549b640839622a136e00b6e6737a73ccef583bff3181d7596c2ec8172a"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.139553 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" event={"ID":"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea","Type":"ContainerStarted","Data":"a690ca3b87e2acc517b911a3d4d89655c668e5f49e99639f67dc29c1433087c2"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.141782 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" podStartSLOduration=101.141765576 podStartE2EDuration="1m41.141765576s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.117454783 +0000 UTC m=+124.392951723" watchObservedRunningTime="2026-02-02 00:12:05.141765576 +0000 UTC m=+124.417262506" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.147249 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmvtw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.147324 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.147739 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.157508 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.158125 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.658100709 +0000 UTC m=+124.933597639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.177786 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" event={"ID":"64332d15-ee3f-4864-9165-3217a06b24c2","Type":"ContainerStarted","Data":"7b205ac45f41d5119940bc7240d1e8443f3a97c3700d1a88f8136ad1ebb839b9"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.247555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" event={"ID":"8285a46b-171e-4c8c-ba54-5ab062df76fc","Type":"ContainerStarted","Data":"f55a73b195fc4ff73f7a158b317a4f091e335545c9c7fff202d86972324de8ba"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.259338 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" event={"ID":"8490096f-f230-4160-bb09-338c9fa9f7ca","Type":"ContainerStarted","Data":"35313905bb44ab9622887349e6e479da86c5011d92c1de20652791877e17021c"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.261896 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.262199 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.762125154 +0000 UTC m=+125.037622084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.262741 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.265748 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.765714489 +0000 UTC m=+125.041211419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.285806 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" event={"ID":"d7088c96-1022-40ff-a06c-f6c299744e3a","Type":"ContainerStarted","Data":"dde296639123c62a01bda198e41a2bd13f137ade7edb20b694d143a8922fecc1"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.288550 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" event={"ID":"e88c0487-caa2-44ee-a139-33b289b9fc2d","Type":"ContainerStarted","Data":"438263aeffcc2b8c337156661ccdd1797999eed07f6abc5078bb6dbb25881e45"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.290988 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" event={"ID":"9b79d203-f1c7-4523-9d97-51181cdb26d2","Type":"ContainerStarted","Data":"a91f2539c3eeb2902b7397333b66a15d7ebbbfe0ac5d8d5309bc7b7fcfb4537b"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.292688 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q9bzk" event={"ID":"f864fdce-3b6b-4ba2-9159-12c2d21f2601","Type":"ContainerStarted","Data":"7b6f5012e8545a6b7e326c4421cb54a0b6bb10953eb043b7928fc48371d20573"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.295726 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" event={"ID":"2c1108f2-209c-4d4c-affc-fe8fbfd27cca","Type":"ContainerStarted","Data":"157a6d6b4750cd8ba0d89b89e59f900aebf3db15d8fccfe93a51655962608c6d"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.298342 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" event={"ID":"27d783b3-6f7d-4f4d-b054-225bfcb98fd5","Type":"ContainerStarted","Data":"5911c7cd1065babf88a5cddc507d2c4086da750449615ffbe4c0743188f2ef3a"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.300336 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29499840-njc6g" podStartSLOduration=101.300322486 podStartE2EDuration="1m41.300322486s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.299702059 +0000 UTC m=+124.575198999" watchObservedRunningTime="2026-02-02 00:12:05.300322486 +0000 UTC m=+124.575819416" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.304868 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46268: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.328723 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-96tjr" event={"ID":"66ac186f-bc25-4f39-9d7b-394d9683b5c4","Type":"ContainerStarted","Data":"d338299e0a43d133d38e00f771194afb0bbe5cbc1ea6345a676cc0e14d25ce81"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.332922 5108 generic.go:358] "Generic (PLEG): container finished" podID="2b96d2a0-be27-428e-8bfd-f78a09feb756" containerID="27aadd57983610ac0f185271929402ea50f3644923e9bd626982607ee695c627" exitCode=0 Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.333021 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" event={"ID":"2b96d2a0-be27-428e-8bfd-f78a09feb756","Type":"ContainerDied","Data":"27aadd57983610ac0f185271929402ea50f3644923e9bd626982607ee695c627"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.336022 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-4zcv5" event={"ID":"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2","Type":"ContainerStarted","Data":"c7422d62d76a89e9d61974b53e17891502d757d0a2000d16d9c1867ba87f128f"} Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.344027 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zhjc8" podStartSLOduration=101.344003323 podStartE2EDuration="1m41.344003323s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.342848981 +0000 UTC m=+124.618345931" watchObservedRunningTime="2026-02-02 00:12:05.344003323 +0000 UTC m=+124.619500263" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.372140 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.374424 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-02 00:12:05.874398757 +0000 UTC m=+125.149895697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.382941 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46274: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.474511 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.476801 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:05.976779448 +0000 UTC m=+125.252276378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.477936 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46278: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.544095 5108 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-4lq2m container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.544541 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.564100 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=45.56407492 podStartE2EDuration="45.56407492s" podCreationTimestamp="2026-02-02 00:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.563243908 +0000 UTC m=+124.838740848" watchObservedRunningTime="2026-02-02 00:12:05.56407492 +0000 UTC m=+124.839571840" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 
00:12:05.575857 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.579693 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.079664733 +0000 UTC m=+125.355161663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.587980 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46282: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.611571 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.617243 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" podStartSLOduration=101.617204707 podStartE2EDuration="1m41.617204707s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.608590939 +0000 UTC m=+124.884087879" watchObservedRunningTime="2026-02-02 00:12:05.617204707 +0000 UTC m=+124.892701637" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.681510 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46294: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.683033 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.683493 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.183476352 +0000 UTC m=+125.458973282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.707912 5108 patch_prober.go:28] interesting pod/console-operator-67c89758df-znc99 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.707986 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-znc99" podUID="dace4fd5-2d12-4c11-8252-9ac7426f870b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.784059 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.784533 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.284496917 +0000 UTC m=+125.559993847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.784895 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.787016 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.286994383 +0000 UTC m=+125.562491443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.803895 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46298: no serving certificate available for the kubelet" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.806646 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" podStartSLOduration=101.806620103 podStartE2EDuration="1m41.806620103s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.753451715 +0000 UTC m=+125.028948655" watchObservedRunningTime="2026-02-02 00:12:05.806620103 +0000 UTC m=+125.082117033" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.807370 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vbckt" podStartSLOduration=101.807362782 podStartE2EDuration="1m41.807362782s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.80313611 +0000 UTC m=+125.078633050" watchObservedRunningTime="2026-02-02 00:12:05.807362782 +0000 UTC m=+125.082859712" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.827664 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" podStartSLOduration=101.827647539 podStartE2EDuration="1m41.827647539s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.825677367 +0000 UTC m=+125.101174317" watchObservedRunningTime="2026-02-02 00:12:05.827647539 +0000 UTC m=+125.103144469" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.870209 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.870738 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.874218 5108 patch_prober.go:28] interesting pod/apiserver-8596bd845d-fn572 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.40:8443/livez\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.874288 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" podUID="8eb5f446-9d16-4ceb-9bb7-9424862cac0b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.40:8443/livez\": dial tcp 10.217.0.40:8443: connect: connection 
refused" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.886780 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.887023 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.38697835 +0000 UTC m=+125.662475280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.888022 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.894830 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.895763 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.395742892 +0000 UTC m=+125.671239822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.901741 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.901826 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.901899 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.907471 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" podStartSLOduration=101.907449823 podStartE2EDuration="1m41.907449823s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.869723303 +0000 UTC m=+125.145220253" watchObservedRunningTime="2026-02-02 00:12:05.907449823 +0000 UTC m=+125.182946743" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.947071 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g8d7h" podStartSLOduration=101.947050451 podStartE2EDuration="1m41.947050451s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.909370453 +0000 UTC m=+125.184867393" watchObservedRunningTime="2026-02-02 00:12:05.947050451 +0000 UTC m=+125.222547381" Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.989833 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.990083 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2k5pl" podStartSLOduration=101.99006325 podStartE2EDuration="1m41.99006325s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.949971449 +0000 UTC m=+125.225468389" watchObservedRunningTime="2026-02-02 00:12:05.99006325 
+0000 UTC m=+125.265560180" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.990581 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.490559233 +0000 UTC m=+125.766056163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:05 crc kubenswrapper[5108]: I0202 00:12:05.991027 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:05 crc kubenswrapper[5108]: E0202 00:12:05.991530 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.491522149 +0000 UTC m=+125.767019079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.026774 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46312: no serving certificate available for the kubelet" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.028907 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podStartSLOduration=102.028877878 podStartE2EDuration="1m42.028877878s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:05.994697883 +0000 UTC m=+125.270194813" watchObservedRunningTime="2026-02-02 00:12:06.028877878 +0000 UTC m=+125.304374808" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.092396 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.092838 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.592813611 +0000 UTC m=+125.868310541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.194243 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.194713 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.694692209 +0000 UTC m=+125.970189139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.296566 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.296877 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.796841964 +0000 UTC m=+126.072338894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.297258 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.297659 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.797639795 +0000 UTC m=+126.073136795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.386666 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-96tjr" event={"ID":"66ac186f-bc25-4f39-9d7b-394d9683b5c4","Type":"ContainerStarted","Data":"710f09f87b57c061ef933ae0ed00cf0c1ff29fc614b75e2305f43b0293a4e770"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.401262 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.401439 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.901412933 +0000 UTC m=+126.176909863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.403512 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.404423 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:06.904413102 +0000 UTC m=+126.179910032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.415542 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46326: no serving certificate available for the kubelet" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.462299 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" event={"ID":"2b96d2a0-be27-428e-8bfd-f78a09feb756","Type":"ContainerStarted","Data":"0ab98fc00cb1c3500402e04faf4806bbbff16e8d22e3529c3633a861ce522222"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.463359 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.484347 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-4zcv5" event={"ID":"51f1951c-4ea1-4d6b-a965-5faf55ee8ed2","Type":"ContainerStarted","Data":"72c18e1195a5618901ea2deb8ab5d9bb93c1bf64d972a0f52ea04a01a867f558"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.498704 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" podStartSLOduration=102.498680519 podStartE2EDuration="1m42.498680519s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.496973293 +0000 UTC m=+125.772470253" watchObservedRunningTime="2026-02-02 00:12:06.498680519 +0000 UTC m=+125.774177449" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.502151 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-824d7" event={"ID":"ec9d7fc9-2385-408d-87f0-f2efafa41865","Type":"ContainerStarted","Data":"b7f60eaf52b9ef737f604bec27ca8c5d4bbceeb1fdea44d068cd1fb672e28543"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.505013 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-96tjr" podStartSLOduration=7.504978595 podStartE2EDuration="7.504978595s" podCreationTimestamp="2026-02-02 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.413603265 +0000 UTC m=+125.689100205" watchObservedRunningTime="2026-02-02 00:12:06.504978595 +0000 UTC m=+125.780475525" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.508609 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.510257 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.010215984 +0000 UTC m=+126.285713064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.526781 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" event={"ID":"e1b2e108-2c25-4942-b6bb-9bd186134bc9","Type":"ContainerStarted","Data":"e3e212c1a907b06a08c0874a4a7e782b1cd96348ead4ad845896371accc2b9fc"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.536987 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-4zcv5" podStartSLOduration=102.536951021 podStartE2EDuration="1m42.536951021s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.534297101 +0000 UTC m=+125.809794041" watchObservedRunningTime="2026-02-02 00:12:06.536951021 +0000 UTC m=+125.812447941" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.548759 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" event={"ID":"97af9c02-0ff8-4146-9313-f3ecc17e1faa","Type":"ContainerStarted","Data":"c52202087275d9c392ee71614a4bdc7280f0da97e4aab336cd55eefc8f9f9cce"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.549752 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:06 crc 
kubenswrapper[5108]: I0202 00:12:06.559611 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-824d7" podStartSLOduration=7.55957679 podStartE2EDuration="7.55957679s" podCreationTimestamp="2026-02-02 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.556050987 +0000 UTC m=+125.831547927" watchObservedRunningTime="2026-02-02 00:12:06.55957679 +0000 UTC m=+125.835073720" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.560106 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" event={"ID":"4c22e3c9-f940-436c-bcd4-0ae77d143061","Type":"ContainerStarted","Data":"3d130c51810ca80c8780d85b3a1c6ab4108d688cf06fe845ba58d55f74cd48e4"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.569125 5108 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-mztxr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.569295 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" podUID="97af9c02-0ff8-4146-9313-f3ecc17e1faa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.572971 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" event={"ID":"fde8d9df-2e55-498d-acbe-7b5396cac5a7","Type":"ContainerStarted","Data":"324379fbc8b1e9fd64f4683e4a6f6d22089fc5a80820695f409c29698e844409"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.597744 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerStarted","Data":"17a3c312150e2ad187bcb50ece3a0a3479395c7e181149518d0b3bec568dcd5a"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.598883 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmvtw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.598918 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.616918 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.116897939 +0000 UTC m=+126.392394869 (durationBeforeRetry 500ms). 
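The recurring pair of records in this stretch — an "operationExecutor.MountVolume started" or "UnmountVolume started" line immediately followed by "No retries permitted until … (durationBeforeRetry 500ms)" — is the kubelet's per-volume operation gating: after a failed attempt, the volume manager refuses to re-run the same operation until a backoff window has elapsed. The sketch below shows that gating pattern in miniature, assuming a simple doubling backoff with a cap; it is illustrative only, not the kubelet's actual nestedpendingoperations implementation, and the key, cap, and error text are chosen to mirror the log.

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// backoffGate refuses to run an operation for a key until its retry
// window has elapsed -- a simplified version of the gating that
// produces "No retries permitted until ..." in the journal above.
type backoffGate struct {
	mu   sync.Mutex
	next map[string]time.Time     // earliest permitted retry per key
	wait map[string]time.Duration // current backoff window per key
}

func newBackoffGate() *backoffGate {
	return &backoffGate{next: map[string]time.Time{}, wait: map[string]time.Duration{}}
}

func (g *backoffGate) Run(key string, op func() error) error {
	g.mu.Lock()
	if until, ok := g.next[key]; ok && time.Now().Before(until) {
		g.mu.Unlock()
		return fmt.Errorf("no retries permitted until %s", until.Format(time.RFC3339Nano))
	}
	g.mu.Unlock()

	err := op()

	g.mu.Lock()
	defer g.mu.Unlock()
	if err != nil {
		d := g.wait[key]
		switch {
		case d == 0:
			d = 500 * time.Millisecond // initial window, as seen in the log
		case d < 2*time.Minute:
			d *= 2 // exponential growth up to a cap (assumed here)
		}
		g.wait[key] = d
		g.next[key] = time.Now().Add(d)
		return err
	}
	// success clears the backoff state for this key
	delete(g.wait, key)
	delete(g.next, key)
	return nil
}

func main() {
	g := newBackoffGate()
	fail := func() error { return errors.New("driver not registered") }
	for i := 0; i < 3; i++ {
		fmt.Println(g.Run("pvc-b21f41aa", fail))
		time.Sleep(100 * time.Millisecond)
	}
}

Because operations are keyed per volume and pod, the MountVolume attempts for image-registry-66587d64c8-mjr86 and the UnmountVolume attempts for the departing pod 9e9b5059-… throttle independently, which is why the two retry loops interleave throughout the log.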
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.613990 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.631387 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7v2ch" podStartSLOduration=102.631359952 podStartE2EDuration="1m42.631359952s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.596035146 +0000 UTC m=+125.871532086" watchObservedRunningTime="2026-02-02 00:12:06.631359952 +0000 UTC m=+125.906856882" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.644027 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-cp5z2" event={"ID":"07d89198-8b8e-4edc-96b8-05b6df5194f6","Type":"ContainerStarted","Data":"eeda0735367749aa2e538d9f6b415570b629014d0b7c343ab8f25cae42b998ed"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.645273 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.693850 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.693969 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.694353 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" event={"ID":"00c9b96f-70c1-47b2-ab2f-570c9911ecaf","Type":"ContainerStarted","Data":"1bc48f23bfa642442e677ada079579d82166f03d7ac885c09d39584358fdd49a"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.685205 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7hvdm" podStartSLOduration=102.685168006 podStartE2EDuration="1m42.685168006s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 00:12:06.639689562 +0000 UTC m=+125.915186502" watchObservedRunningTime="2026-02-02 00:12:06.685168006 +0000 UTC m=+125.960664936" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.731860 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" podStartSLOduration=102.731842553 podStartE2EDuration="1m42.731842553s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.70870311 +0000 UTC m=+125.984200040" watchObservedRunningTime="2026-02-02 00:12:06.731842553 +0000 UTC m=+126.007339483" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.736660 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.738054 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.238030626 +0000 UTC m=+126.513527556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.740657 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-llk9m" podStartSLOduration=102.740633005 podStartE2EDuration="1m42.740633005s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.730482146 +0000 UTC m=+126.005979086" watchObservedRunningTime="2026-02-02 00:12:06.740633005 +0000 UTC m=+126.016129935" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.758774 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" event={"ID":"f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc","Type":"ContainerStarted","Data":"a2a39167d6d7e6c0a2990e61e06142b9462e1998d955efeb9b8ebde09a404a54"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.760188 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.767866 5108 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-h2slm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" 
start-of-body= Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.767914 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" podUID="f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.772476 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-cp5z2" podStartSLOduration=102.772464238 podStartE2EDuration="1m42.772464238s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.770858646 +0000 UTC m=+126.046355596" watchObservedRunningTime="2026-02-02 00:12:06.772464238 +0000 UTC m=+126.047961168" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.796650 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" event={"ID":"6c411323-7b32-4e2b-a2b9-c6b63abeb1ea","Type":"ContainerStarted","Data":"9b218b76fc3cfb3ac69f22ca94617bf588dd68acb2fedc57c3137ca671997ebf"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.797618 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.802378 5108 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-z28zc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.802430 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" podUID="6c411323-7b32-4e2b-a2b9-c6b63abeb1ea" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.802956 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qmhlw" podStartSLOduration=102.802937715 podStartE2EDuration="1m42.802937715s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.800702286 +0000 UTC m=+126.076199226" watchObservedRunningTime="2026-02-02 00:12:06.802937715 +0000 UTC m=+126.078434645" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.821107 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" event={"ID":"64332d15-ee3f-4864-9165-3217a06b24c2","Type":"ContainerStarted","Data":"f746cfa1b226d194076a81bc09280df7d2ca9bc3bdc50fc530e2f5cafd0ed8cd"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.834162 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" 
event={"ID":"525b7b06-ae33-4a3b-bf12-139bff69a17c","Type":"ContainerStarted","Data":"4464db89cbe5f99d96c0c05963685a847dc480eba93dd746fa39e3752e5fafdb"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.834172 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" podStartSLOduration=102.834151581 podStartE2EDuration="1m42.834151581s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.831843621 +0000 UTC m=+126.107340561" watchObservedRunningTime="2026-02-02 00:12:06.834151581 +0000 UTC m=+126.109648511" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.844037 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.844725 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.344701001 +0000 UTC m=+126.620197921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.871594 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" event={"ID":"e88c0487-caa2-44ee-a139-33b289b9fc2d","Type":"ContainerStarted","Data":"c9c1510e0e7a2b73e6633080724d52fc81d26026d11a94f526c06cccdb9f97fe"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.872593 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" podStartSLOduration=102.872579899 podStartE2EDuration="1m42.872579899s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.871698846 +0000 UTC m=+126.147195776" watchObservedRunningTime="2026-02-02 00:12:06.872579899 +0000 UTC m=+126.148076819" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.892794 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" event={"ID":"99916b4a-423b-4db6-a912-cc2ef585eab3","Type":"ContainerStarted","Data":"9af916a3e2c690fac19e65958c6a59828797446e3c0964884e1bddea6549a167"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.893737 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-m7wqk" podStartSLOduration=102.893714949 
podStartE2EDuration="1m42.893714949s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.893122273 +0000 UTC m=+126.168619213" watchObservedRunningTime="2026-02-02 00:12:06.893714949 +0000 UTC m=+126.169211879" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.900517 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q9bzk" event={"ID":"f864fdce-3b6b-4ba2-9159-12c2d21f2601","Type":"ContainerStarted","Data":"f581a964a78f67535ae45c2872fba7b71f95b64da206093e101568faeea41f9a"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.909095 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:06 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:06 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:06 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.909155 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.918718 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" event={"ID":"2c1108f2-209c-4d4c-affc-fe8fbfd27cca","Type":"ContainerStarted","Data":"7048b856a9bd96fe898a5dc34cd39cba8845962301bb19480b77681b35124f3b"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.919215 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.935219 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-r7j49" podStartSLOduration=102.935189968 podStartE2EDuration="1m42.935189968s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.923634031 +0000 UTC m=+126.199130981" watchObservedRunningTime="2026-02-02 00:12:06.935189968 +0000 UTC m=+126.210686898" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.952390 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:06 crc kubenswrapper[5108]: E0202 00:12:06.952999 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.452956678 +0000 UTC m=+126.728453618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.966444 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ft7zd" podStartSLOduration=102.966420165 podStartE2EDuration="1m42.966420165s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.965372766 +0000 UTC m=+126.240869706" watchObservedRunningTime="2026-02-02 00:12:06.966420165 +0000 UTC m=+126.241917095" Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.976023 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" event={"ID":"27d783b3-6f7d-4f4d-b054-225bfcb98fd5","Type":"ContainerStarted","Data":"cdb5b6136f9949f2b96c7eb6c9309f9ff4a2452f2041a46b18ca06b2be9bcbbd"} Feb 02 00:12:06 crc kubenswrapper[5108]: I0202 00:12:06.993050 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" podStartSLOduration=102.993025399 podStartE2EDuration="1m42.993025399s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:06.986858796 +0000 UTC m=+126.262355736" watchObservedRunningTime="2026-02-02 00:12:06.993025399 +0000 UTC m=+126.268522329" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.015897 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" podStartSLOduration=103.015875164 podStartE2EDuration="1m43.015875164s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:07.014304313 +0000 UTC m=+126.289801253" watchObservedRunningTime="2026-02-02 00:12:07.015875164 +0000 UTC m=+126.291372084" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.055055 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.055833 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.555791051 +0000 UTC m=+126.831287981 (durationBeforeRetry 500ms). 
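Every Mount/Unmount failure in this stretch bottoms out in the same lookup: the kubelet resolves a CSI driver name against its in-memory registry of node plugins, which is populated only when the driver registers over the kubelet's plugin-registration socket. Until the hostpath plugin pod (csi-hostpathplugin-hnl48, whose ContainerStarted event appears later in this log) registers, the lookup fails fast and the retry loop continues. The sketch below shows that registry lookup in simplified form, with a hypothetical socket path; the real kubelet keeps considerably more per-driver state.

package main

import (
	"fmt"
	"sync"
)

// driverRegistry mimics the kubelet's in-memory list of registered
// CSI drivers: lookups fail until the node plugin has registered
// over the plugin-registration socket. Simplified sketch only.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> unix socket endpoint
}

func (r *driverRegistry) Register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

func (r *driverRegistry) Client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		// the error text surfaced throughout the journal above
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}}
	if _, err := reg.Client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("before registration:", err)
	}
	// hypothetical endpoint; registration normally happens via the
	// plugin's registrar sidecar, not a direct call like this
	reg.Register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	if ep, err := reg.Client("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("after registration:", ep) // mounts can now proceed
	}
}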
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.098339 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46330: no serving certificate available for the kubelet" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.159736 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.159973 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.659932269 +0000 UTC m=+126.935429199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.160192 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.160596 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.660580676 +0000 UTC m=+126.936077606 (durationBeforeRetry 500ms). 
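The interleaved "TLS handshake error from 192.168.126.11:… no serving certificate available for the kubelet" records suggest the kubelet's serving-certificate rotation has not yet produced a certificate (for example, its serving CSR is still pending approval), so the server's certificate callback has nothing to return and incoming handshakes are rejected. A minimal crypto/tls sketch of that failure mode follows, under that assumption; the rotation machinery itself is elided.

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"sync/atomic"
)

// current holds the serving certificate once rotation produces one;
// until then every handshake fails with the error seen in the log.
var current atomic.Pointer[tls.Certificate]

func getCertificate(*tls.ClientHelloInfo) (*tls.Certificate, error) {
	if c := current.Load(); c != nil {
		return c, nil
	}
	// surfaces in the journal as "TLS handshake error ...
	// no serving certificate available for the kubelet"
	return nil, errors.New("no serving certificate available for the kubelet")
}

func main() {
	cfg := &tls.Config{GetCertificate: getCertificate}
	// simulate one handshake's certificate lookup
	_, err := cfg.GetCertificate(&tls.ClientHelloInfo{})
	fmt.Println(err) // fails until a certificate is rotated in
}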
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.261618 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.261856 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.761820556 +0000 UTC m=+127.037317486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.262039 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.262394 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.762386731 +0000 UTC m=+127.037883661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.346220 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.362960 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.363086 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.863058997 +0000 UTC m=+127.138555927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.363691 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.364075 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.864065364 +0000 UTC m=+127.139562294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.466008 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.466255 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.966204959 +0000 UTC m=+127.241701889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.466687 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.467130 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:07.967113642 +0000 UTC m=+127.242610572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.567294 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.567486 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.06745364 +0000 UTC m=+127.342950560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.567815 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.568159 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.068144528 +0000 UTC m=+127.343641458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.669099 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.669255 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.169180744 +0000 UTC m=+127.444677664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.669769 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.670176 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.17016315 +0000 UTC m=+127.445660080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.771003 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.771256 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.271202545 +0000 UTC m=+127.546699485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.771479 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.771843 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.271823232 +0000 UTC m=+127.547320242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.873299 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.873591 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.373552755 +0000 UTC m=+127.649049685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.873745 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.874103 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.37409348 +0000 UTC m=+127.649590410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.904201 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:07 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:07 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:07 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.904289 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.975936 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.976176 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.476135882 +0000 UTC m=+127.751632812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.976634 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:07 crc kubenswrapper[5108]: E0202 00:12:07.977043 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.477027655 +0000 UTC m=+127.752524585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.983932 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" event={"ID":"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b","Type":"ContainerStarted","Data":"fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4"} Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.984933 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.993204 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" event={"ID":"45594040-ee30-4578-aa8c-a9e8ef858c06","Type":"ContainerStarted","Data":"790fe13975c980ebcb7c76c8e69d8c4b5bd603664d7da8b1d08e4ed422450fae"} Feb 02 00:12:07 crc kubenswrapper[5108]: I0202 00:12:07.999239 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" event={"ID":"74feb297-18d1-4e3a-b077-779e202c89da","Type":"ContainerStarted","Data":"004c454c509e890a028ad24ad5589c03a218efc7d31b0886bb5261bf27c9327b"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.008147 5108 generic.go:358] "Generic (PLEG): container finished" podID="8285a46b-171e-4c8c-ba54-5ab062df76fc" containerID="f55a73b195fc4ff73f7a158b317a4f091e335545c9c7fff202d86972324de8ba" exitCode=0 Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.008272 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" event={"ID":"8285a46b-171e-4c8c-ba54-5ab062df76fc","Type":"ContainerDied","Data":"f55a73b195fc4ff73f7a158b317a4f091e335545c9c7fff202d86972324de8ba"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.012015 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" event={"ID":"8490096f-f230-4160-bb09-338c9fa9f7ca","Type":"ContainerStarted","Data":"b908f275aae1aaf7c4c562e827fe1b58eaa6c5a439a4b12c6a5f9a93dd3d59dc"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.015943 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" event={"ID":"99916b4a-423b-4db6-a912-cc2ef585eab3","Type":"ContainerStarted","Data":"95ae580227d64e996fd6c4eb214373a572187c0e5e5ddc76ce8ae839e3a10f1c"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.017853 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" event={"ID":"9b79d203-f1c7-4523-9d97-51181cdb26d2","Type":"ContainerStarted","Data":"dd17f91a9e7bf2761e4b90fddb30f8edfdcf12c9b8105681db073bcfdf03e7ee"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.020985 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.022877 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-q9bzk" event={"ID":"f864fdce-3b6b-4ba2-9159-12c2d21f2601","Type":"ContainerStarted","Data":"a7401cc9d5ec136d233b3818be7092e4551e67126c925ad9d5a73a7469eeba49"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.023013 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-q9bzk" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.025504 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br" event={"ID":"2c1108f2-209c-4d4c-affc-fe8fbfd27cca","Type":"ContainerStarted","Data":"be240166e4ca3f513a63efcc02aeb296bb9fb2204003bd906232438ea6aa0a8a"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.028059 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tkjzb" event={"ID":"27d783b3-6f7d-4f4d-b054-225bfcb98fd5","Type":"ContainerStarted","Data":"28a237d432d0c45ff9af8f1e618332c53ebefb605b5dbbb846fdce9c29d4ab4c"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.030250 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" event={"ID":"917a1c8b-59d5-4acb-8cef-91979326a7d1","Type":"ContainerStarted","Data":"4385ec25f9530507d880fa25979bb56c026c5e36ad48bc8a34a7213b4081acf6"} Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.030402 5108 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-h2slm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.030495 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm" podUID="f7f3fbf6-f8a5-4122-8f5b-fed0e4cf59dc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.032022 5108 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-z28zc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.032056 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc" podUID="6c411323-7b32-4e2b-a2b9-c6b63abeb1ea" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.035394 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.035448 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.035589 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmvtw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.035670 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.044103 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podStartSLOduration=9.044079151 podStartE2EDuration="9.044079151s" podCreationTimestamp="2026-02-02 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.041997136 +0000 UTC m=+127.317494066" watchObservedRunningTime="2026-02-02 00:12:08.044079151 +0000 UTC m=+127.319576081" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.044520 5108 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-cvtnf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.044706 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf" podUID="2b96d2a0-be27-428e-8bfd-f78a09feb756" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.077307 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-mztxr" Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.081195 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.085818 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.585792255 +0000 UTC m=+127.861289345 (durationBeforeRetry 500ms). 
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.103162 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" podStartSLOduration=104.103143105 podStartE2EDuration="1m44.103143105s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.100607238 +0000 UTC m=+127.376104188" watchObservedRunningTime="2026-02-02 00:12:08.103143105 +0000 UTC m=+127.378640035"
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.157874 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-wb8mw" podStartSLOduration=104.157844304 podStartE2EDuration="1m44.157844304s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.155979114 +0000 UTC m=+127.431476054" watchObservedRunningTime="2026-02-02 00:12:08.157844304 +0000 UTC m=+127.433341234"
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.186501 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.187711 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.687684454 +0000 UTC m=+127.963181594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.188941 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-6jnxl" podStartSLOduration=104.188912666 podStartE2EDuration="1m44.188912666s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.186451581 +0000 UTC m=+127.461948541" watchObservedRunningTime="2026-02-02 00:12:08.188912666 +0000 UTC m=+127.464409596"
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.255983 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-x5pzk" podStartSLOduration=104.255964832 podStartE2EDuration="1m44.255964832s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.225928356 +0000 UTC m=+127.501425296" watchObservedRunningTime="2026-02-02 00:12:08.255964832 +0000 UTC m=+127.531461762"
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.284885 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-q9bzk" podStartSLOduration=9.284868357 podStartE2EDuration="9.284868357s" podCreationTimestamp="2026-02-02 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.282306359 +0000 UTC m=+127.557803309" watchObservedRunningTime="2026-02-02 00:12:08.284868357 +0000 UTC m=+127.560365287"
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.294908 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.295334 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.795314624 +0000 UTC m=+128.070811554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
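[Editor's note: the pod_startup_latency_tracker.go records above report how long each pod took from creation to being observed running; with firstStartedPulling/lastFinishedPulling at the zero time (no image pull observed), the SLO duration and the E2E duration coincide. A small sketch of the arithmetic, using values copied from the apiserver-9ddfb9f55-wbv6f record; the field names mirror the log, but this is not the tracker's actual code.]

package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp and watchObservedRunningTime from the record above.
	created := time.Date(2026, 2, 2, 0, 10, 24, 0, time.UTC)
	observedRunning := time.Date(2026, 2, 2, 0, 12, 8, 103143105, time.UTC)

	// With no image pull observed, the SLO duration collapses to
	// running-minus-created, reported once in seconds and once formatted.
	e2e := observedRunning.Sub(created)
	fmt.Printf("podStartSLOduration=%.9f podStartE2EDuration=%q\n", e2e.Seconds(), e2e.String())
	// Prints: podStartSLOduration=104.103143105 podStartE2EDuration="1m44.103143105s"
}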
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.311166 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-9l4wv" podStartSLOduration=104.311131693 podStartE2EDuration="1m44.311131693s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:08.308554995 +0000 UTC m=+127.584051935" watchObservedRunningTime="2026-02-02 00:12:08.311131693 +0000 UTC m=+127.586628623"
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.397097 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.397611 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.897593603 +0000 UTC m=+128.173090543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.424139 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46346: no serving certificate available for the kubelet"
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.498861 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.499446 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:08.999427099 +0000 UTC m=+128.274924029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.601260 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.601715 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.101695337 +0000 UTC m=+128.377192267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.702603 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.702711 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.202688661 +0000 UTC m=+128.478185591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
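[Editor's note: every mount and unmount attempt for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 fails with the same root cause: the kubelet looks the driver name up in its registry of node-registered CSI plugins, and kubevirt.io.hostpath-provisioner is not there yet (its csi-hostpathplugin-hnl48 pod only just got a ContainerStarted event earlier in this log). A minimal sketch of that lookup, with an illustrative map standing in for the kubelet's real plugin registry.]

package main

import "fmt"

// registered stands in for the kubelet's CSI driver registry, which is filled
// in when a driver's node plugin registers over the kubelet plugin socket.
// The hostpath-provisioner entry is commented out to reproduce the state the
// log captures: the plugin pod has started but has not registered yet.
var registered = map[string]bool{
	// "kubevirt.io.hostpath-provisioner": true, // appears only after registration
}

func newCsiDriverClient(driverName string) error {
	if !registered[driverName] {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driverName)
	}
	return nil
}

func main() {
	fmt.Println(newCsiDriverClient("kubevirt.io.hostpath-provisioner"))
}

[Once registration completes, the same lookup succeeds and the queued mount and unmount operations go through; nothing else about the pods needs to change.]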
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.703058 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.703372 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.20336465 +0000 UTC m=+128.478861580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.768186 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ng2x6"]
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.805395 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.805506 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.305483153 +0000 UTC m=+128.580980083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.805844 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.806218 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.306206942 +0000 UTC m=+128.581703882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.901820 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:08 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:08 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:08 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.901948 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:08 crc kubenswrapper[5108]: I0202 00:12:08.907355 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:08 crc kubenswrapper[5108]: E0202 00:12:08.907733 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.4077025 +0000 UTC m=+128.683199430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
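[Editor's note: the router's startup probe above returns HTTP 500 with a body listing named sub-checks, each prefixed [+] or [-] and with failure reasons withheld. A sketch of how such an aggregated healthz endpoint can be assembled; the check names are copied from the log, but the handler itself is an illustrative assumption, not the router's code.]

package main

import (
	"fmt"
	"net/http"
)

// check is one named sub-check; the probe body above is the rendering of
// several of these, with reasons withheld from unauthenticated callers.
type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // the 500 the kubelet logs
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	notReady := func() error { return fmt.Errorf("not ready") }
	ok := func() error { return nil }
	http.HandleFunc("/healthz", healthz([]check{
		{"backend-http", notReady},
		{"has-synced", notReady},
		{"process-running", ok},
	}))
	http.ListenAndServe("127.0.0.1:8080", nil)
}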
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.008889 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.009263 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.509248919 +0000 UTC m=+128.784745849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.040324 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.040372 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.110000 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.110497 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.6104773 +0000 UTC m=+128.885974230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.212583 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.217760 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.71774555 +0000 UTC m=+128.993242480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.313590 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.314011 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.813991969 +0000 UTC m=+129.089488899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.415456 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.415872 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:09.915857176 +0000 UTC m=+129.191354096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.481401 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-h2slm"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.483444 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-cvtnf"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.530985 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.531351 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.031330664 +0000 UTC m=+129.306827594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.632900 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.634097 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.134077655 +0000 UTC m=+129.409574585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.646043 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.734097 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.734286 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.234250088 +0000 UTC m=+129.509747008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
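[Editor's note: each failed volume operation above is immediately followed by a nestedpendingoperations.go record refusing retries for 500ms; that per-volume backoff gate keeps the reconciler from hot-looping on an error that cannot clear until the CSI driver registers. A rough Go sketch of the pattern; the real kubelet also grows the backoff exponentially across repeated failures, while this fixed 500ms version matches the durationBeforeRetry the log shows per attempt.]

package main

import (
	"fmt"
	"time"
)

// pending gates retries per volume: after a failure, no new attempt is
// permitted until the recorded deadline passes.
type pending struct {
	notBefore map[string]time.Time
}

func (p *pending) run(volume string, op func() error) error {
	if t, ok := p.notBefore[volume]; ok && time.Now().Before(t) {
		return fmt.Errorf("no retries permitted until %s", t.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		p.notBefore[volume] = time.Now().Add(500 * time.Millisecond)
		return fmt.Errorf("failed, durationBeforeRetry 500ms: %w", err)
	}
	delete(p.notBefore, volume)
	return nil
}

func main() {
	p := &pending{notBefore: map[string]time.Time{}}
	fail := func() error { return fmt.Errorf("driver not registered") }
	fmt.Println(p.run("pvc-b21f41aa", fail)) // runs and fails, arming the gate
	fmt.Println(p.run("pvc-b21f41aa", fail)) // denied: still inside the 500ms window
}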
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.734788 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.735188 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.235168462 +0000 UTC m=+129.510665382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.751790 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.752357 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8285a46b-171e-4c8c-ba54-5ab062df76fc" containerName="collect-profiles"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.752373 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8285a46b-171e-4c8c-ba54-5ab062df76fc" containerName="collect-profiles"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.752478 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="8285a46b-171e-4c8c-ba54-5ab062df76fc" containerName="collect-profiles"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.778488 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.778660 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.784120 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.784319 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.835795 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume\") pod \"8285a46b-171e-4c8c-ba54-5ab062df76fc\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") "
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.836063 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.836114 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcnnp\" (UniqueName: \"kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp\") pod \"8285a46b-171e-4c8c-ba54-5ab062df76fc\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") "
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.836266 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume\") pod \"8285a46b-171e-4c8c-ba54-5ab062df76fc\" (UID: \"8285a46b-171e-4c8c-ba54-5ab062df76fc\") "
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.836403 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.336365741 +0000 UTC m=+129.611862671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.836935 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.837183 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "8285a46b-171e-4c8c-ba54-5ab062df76fc" (UID: "8285a46b-171e-4c8c-ba54-5ab062df76fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.837508 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.337500572 +0000 UTC m=+129.612997502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.849973 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp" (OuterVolumeSpecName: "kube-api-access-xcnnp") pod "8285a46b-171e-4c8c-ba54-5ab062df76fc" (UID: "8285a46b-171e-4c8c-ba54-5ab062df76fc"). InnerVolumeSpecName "kube-api-access-xcnnp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.855712 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.855861 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8285a46b-171e-4c8c-ba54-5ab062df76fc" (UID: "8285a46b-171e-4c8c-ba54-5ab062df76fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.894686 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.894879 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.903509 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:09 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:09 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:09 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.903627 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.903697 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.903853 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.938639 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.938870 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.938943 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.939021 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xcnnp\" (UniqueName: \"kubernetes.io/projected/8285a46b-171e-4c8c-ba54-5ab062df76fc-kube-api-access-xcnnp\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.939035 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8285a46b-171e-4c8c-ba54-5ab062df76fc-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:09 crc kubenswrapper[5108]: I0202 00:12:09.939044 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8285a46b-171e-4c8c-ba54-5ab062df76fc-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:09 crc kubenswrapper[5108]: E0202 00:12:09.939128 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.439109162 +0000 UTC m=+129.714606092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.001995 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-52cvp"]
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.008618 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52cvp"
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.012870 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.015257 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-52cvp"]
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.040757 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.040811 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.040936 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.041123 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.041161 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.041637 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.541621656 +0000 UTC m=+129.817118586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.041786 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.052978 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" gracePeriod=30
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.053436 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz"
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.054508 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499840-qxdlz" event={"ID":"8285a46b-171e-4c8c-ba54-5ab062df76fc","Type":"ContainerDied","Data":"791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d"}
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.054550 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791e301889cececb220b16971e4a6f533193ec24be50cd2c08fffccb59186f0d"
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.073287 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.102188 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.142662 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143044 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wl9\" (UniqueName: \"kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143080 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143162 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143294 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143436 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.143579 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.643557456 +0000 UTC m=+129.919054376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.143712 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.177344 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.186517 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.218517 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.244506 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.244870 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.244894 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.245448 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.245492 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.245687 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wl9\" (UniqueName: \"kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.245734 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
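[Editor's note: beneath the alternating UnmountVolume/MountVolume records is the volume manager's reconcile loop: it continually diffs its desired state (the new image-registry pod wants the PVC) against its actual state (the volume is still attributed to terminated pod 9e9b5059), and starts an operation for every discrepancy on each pass. A toy version of that diff; the state maps and names are made up for illustration, not the kubelet's types.]

package main

import "fmt"

// reconcile starts an operation for each discrepancy between desired and
// actual volume state, the way reconciler_common.go logs "MountVolume
// started" / "UnmountVolume started" on every pass.
func reconcile(desired, actual map[string]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v)
		}
	}
}

func main() {
	desired := map[string]bool{"pvc-b21f41aa for pod image-registry-66587d64c8-mjr86": true}
	actual := map[string]bool{"pvc-b21f41aa for terminated pod 9e9b5059": true}
	// Both operations start on each pass, and both keep failing until the
	// CSI driver registers, which is exactly the loop this log records.
	reconcile(desired, actual)
}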
\"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.247728 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.747518889 +0000 UTC m=+130.023015819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.248395 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.249366 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.250043 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.251795 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.321610 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wl9\" (UniqueName: \"kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9\") pod \"certified-operators-52cvp\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.339736 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.347727 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.348043 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.348145 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.348279 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55fbs\" (UniqueName: \"kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.349740 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.849709835 +0000 UTC m=+130.125206765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.403814 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.451682 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.451780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55fbs\" (UniqueName: \"kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.451871 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.451927 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.452558 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.452815 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.453521 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:10.953502603 +0000 UTC m=+130.228999533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.482348 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.482575 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.488596 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-55fbs\" (UniqueName: \"kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs\") pod \"community-operators-8l8nm\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.493152 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 02 00:12:10 crc kubenswrapper[5108]: W0202 00:12:10.504799 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaf6bc5fe_38fb_4fd6_b9a9_57172b79a6ca.slice/crio-f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c WatchSource:0}: Error finding container f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c: Status 404 returned error can't find the container with id f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.553642 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.554572 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.054494447 +0000 UTC m=+130.329991377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.554810 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.555485 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.055473264 +0000 UTC m=+130.330970194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.599069 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.601291 5108 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-wbv6f container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]log ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]etcd ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/max-in-flight-filter ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 02 00:12:10 crc kubenswrapper[5108]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 02 00:12:10 crc kubenswrapper[5108]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/project.openshift.io-projectcache ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-startinformers ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 02 00:12:10 crc kubenswrapper[5108]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 00:12:10 crc kubenswrapper[5108]: livez check failed Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.601333 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" podUID="8490096f-f230-4160-bb09-338c9fa9f7ca" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.641670 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.655902 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.656326 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.656412 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmjzg\" (UniqueName: \"kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.656441 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.656605 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.156575631 +0000 UTC m=+130.432072561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.758445 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.758524 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dmjzg\" (UniqueName: \"kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.758541 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.758613 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.758919 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.25890516 +0000 UTC m=+130.534402090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.759474 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.759917 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.790520 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.790950 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.794440 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmjzg\" (UniqueName: \"kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg\") pod \"certified-operators-9ss2j\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.796850 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-52cvp"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.806399 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:12:10 crc kubenswrapper[5108]: W0202 00:12:10.831339 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef823528_7549_4a91_83c9_e5b243ecb37c.slice/crio-f00eee2df222a89df8cd42cafd662c24a80cb3735fd8845f8256dd421fcd07cf WatchSource:0}: Error finding container f00eee2df222a89df8cd42cafd662c24a80cb3735fd8845f8256dd421fcd07cf: Status 404 returned error can't find the container with id f00eee2df222a89df8cd42cafd662c24a80cb3735fd8845f8256dd421fcd07cf Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.844507 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.860785 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.861109 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.361090677 +0000 UTC m=+130.636587607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.886683 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.934839 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:10 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:10 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:10 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.934904 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.962968 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwm9f\" (UniqueName: \"kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.963043 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.963110 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.963144 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:10 crc kubenswrapper[5108]: E0202 00:12:10.963502 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.463488128 +0000 UTC m=+130.738985058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.986939 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.987054 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-9pw49" Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.999393 5108 patch_prober.go:28] interesting pod/console-64d44f6ddf-9pw49 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 02 00:12:10 crc kubenswrapper[5108]: I0202 00:12:10.999467 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-9pw49" podUID="6d992c02-f6cc-4488-9108-a72c6c2f3dcf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.019712 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fn572" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.037905 5108 ???:1] "http: TLS handshake error from 192.168.126.11:46360: no serving certificate available for the kubelet" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.065203 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.065600 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.565567261 +0000 UTC m=+130.841064191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.066443 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.066585 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.066636 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.066794 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dwm9f\" (UniqueName: \"kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.067919 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.567898733 +0000 UTC m=+130.843395743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.068902 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.072626 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.122389 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwm9f\" (UniqueName: \"kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f\") pod \"community-operators-jgmw6\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.129713 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.129989 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ecff25a2-faeb-4efb-9e50-b8981535bbb3","Type":"ContainerStarted","Data":"70144879ca1801ad320f413cacebe5723f4e76015c3286fd5327879285141829"} Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.132520 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.139105 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca","Type":"ContainerStarted","Data":"f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c"} Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.168352 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.168686 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.668648751 +0000 UTC m=+130.944145681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.169092 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.169741 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.669733559 +0000 UTC m=+130.945230489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.187943 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerStarted","Data":"f00eee2df222a89df8cd42cafd662c24a80cb3735fd8845f8256dd421fcd07cf"} Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.257620 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.270362 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.271125 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.771104844 +0000 UTC m=+131.046601774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.372145 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.372687 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.872647963 +0000 UTC m=+131.148144893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.474021 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.474285 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.974241603 +0000 UTC m=+131.249738533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.474716 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.475050 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:11.975041474 +0000 UTC m=+131.250538404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.576341 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.576541 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.076509711 +0000 UTC m=+131.352006641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.665523 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:12:11 crc kubenswrapper[5108]: W0202 00:12:11.678413 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41859985_fc1d_4d4e_bbe8_b0a99955ac0a.slice/crio-6f0c7fb95227a7df0062f6ca54786e7bc1b0d3aad99b375a28cf44d515d2f1be WatchSource:0}: Error finding container 6f0c7fb95227a7df0062f6ca54786e7bc1b0d3aad99b375a28cf44d515d2f1be: Status 404 returned error can't find the container with id 6f0c7fb95227a7df0062f6ca54786e7bc1b0d3aad99b375a28cf44d515d2f1be Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.679770 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.680191 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.180173736 +0000 UTC m=+131.455670666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.780647 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.780918 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.280873022 +0000 UTC m=+131.556369962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.781085 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.781597 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.281567932 +0000 UTC m=+131.557065112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.883958 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.884110 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.384084456 +0000 UTC m=+131.659581386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.884443 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.884842 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.384830435 +0000 UTC m=+131.660327365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.902474 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:11 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:11 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:11 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.902564 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.986194 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.986509 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.486454797 +0000 UTC m=+131.761951757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:11 crc kubenswrapper[5108]: I0202 00:12:11.987662 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:11 crc kubenswrapper[5108]: E0202 00:12:11.988079 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.488060179 +0000 UTC m=+131.763557289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.088493 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.088746 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.588707074 +0000 UTC m=+131.864204004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.089633 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.090327 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.590296577 +0000 UTC m=+131.865793507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.136921 5108 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef823528_7549_4a91_83c9_e5b243ecb37c.slice/crio-9b5a92a0aba545b8dbaeed6f9c1fc9550f60e0adaa5e10b74e9cc24a24cfad00.scope\": RecentStats: unable to find data in memory cache]" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.189108 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.191899 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.192517 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.692493962 +0000 UTC m=+131.967990912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.228214 5108 generic.go:358] "Generic (PLEG): container finished" podID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerID="9b5a92a0aba545b8dbaeed6f9c1fc9550f60e0adaa5e10b74e9cc24a24cfad00" exitCode=0 Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.236085 5108 generic.go:358] "Generic (PLEG): container finished" podID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerID="dc6f982b2d56c1abb172d98e66aa0c15b24571bc47876df35d5985b98e039d3c" exitCode=0 Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272002 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerStarted","Data":"6f0c7fb95227a7df0062f6ca54786e7bc1b0d3aad99b375a28cf44d515d2f1be"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272049 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272066 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" event={"ID":"917a1c8b-59d5-4acb-8cef-91979326a7d1","Type":"ContainerStarted","Data":"affca2f46576140bfc2f7fa793d8be2e955c260a936863a0aaaa74ff13f67148"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272079 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ecff25a2-faeb-4efb-9e50-b8981535bbb3","Type":"ContainerStarted","Data":"2926e9efd55ee24f9bd84c1f1c357729c5787a1065057fec02eee0a89b6c7866"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272092 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca","Type":"ContainerStarted","Data":"f53470f0349cc6b8707af3c2bc15c0525494aead25f907bb884298efb59e0e9b"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272104 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerDied","Data":"9b5a92a0aba545b8dbaeed6f9c1fc9550f60e0adaa5e10b74e9cc24a24cfad00"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272123 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerStarted","Data":"f04bb6768ab8660dd418d641eb48dd64d23f0bc1405200098b46dd1e736803c3"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272134 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerStarted","Data":"eb0a00b12767c4ff782045029b2e342458acfc4bf6b005b9598c899c329f4a88"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272147 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerDied","Data":"dc6f982b2d56c1abb172d98e66aa0c15b24571bc47876df35d5985b98e039d3c"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272160 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerStarted","Data":"bf1f4e8893cf7d38c33c0c17e67ab9bd9445bacbc6cedb29875eaf455b2ef485"} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.272298 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.276073 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.295034 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.295085 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmw2t\" (UniqueName: \"kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.295159 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.296057 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.296715 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.796688091 +0000 UTC m=+132.072185021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.299050 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=3.299029233 podStartE2EDuration="3.299029233s" podCreationTimestamp="2026-02-02 00:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:12.295268884 +0000 UTC m=+131.570765814" watchObservedRunningTime="2026-02-02 00:12:12.299029233 +0000 UTC m=+131.574526163" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.371796 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=3.37176781 podStartE2EDuration="3.37176781s" podCreationTimestamp="2026-02-02 00:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:12.370786464 +0000 UTC m=+131.646283404" watchObservedRunningTime="2026-02-02 00:12:12.37176781 +0000 UTC m=+131.647264740" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.398216 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.398381 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.898350224 +0000 UTC m=+132.173847144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.398722 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.398835 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.398861 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmw2t\" (UniqueName: \"kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.398954 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.399293 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:12.899272628 +0000 UTC m=+132.174769558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.399934 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.400150 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.441956 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmw2t\" (UniqueName: \"kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t\") pod \"redhat-marketplace-wzh6n\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.503213 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.503655 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.003620911 +0000 UTC m=+132.279117841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.504020 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.504424 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.004415853 +0000 UTC m=+132.279912783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.604832 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"] Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.606017 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.606283 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.106240989 +0000 UTC m=+132.381737919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.606646 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.607268 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.107255075 +0000 UTC m=+132.382752005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.619420 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.645643 5108 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.664631 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"] Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.671413 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.705679 5108 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T00:12:12.645668783Z","UUID":"98550e70-daa2-4fdb-9e32-d2c134d8977f","Handler":null,"Name":"","Endpoint":""} Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.708124 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.708273 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-02 00:12:13.20824529 +0000 UTC m=+132.483742220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.708513 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmr8d\" (UniqueName: \"kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.708636 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.708714 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.708805 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: E0202 00:12:12.709324 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-02 00:12:13.209314888 +0000 UTC m=+132.484811819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mjr86" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.722503 5108 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.722767 5108 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.812097 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.813580 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.813741 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.813863 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmr8d\" (UniqueName: \"kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.814131 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.814282 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.831033 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: 
"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.845168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmr8d\" (UniqueName: \"kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d\") pod \"redhat-marketplace-pv288\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.895855 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.903069 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:12 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:12 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:12 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.903179 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.918576 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.935791 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.935854 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:12 crc kubenswrapper[5108]: I0202 00:12:12.957440 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.032605 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mjr86\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.196797 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.233017 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.241810 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.244726 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.261020 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.261357 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 02 00:12:13 crc kubenswrapper[5108]: W0202 00:12:13.292048 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7a5230e_8980_4561_bfb3_015283fcbaa4.slice/crio-ea9359a1525df7dedd3d0704fa36125a2831836999184f23e64643dd75e53b0e WatchSource:0}: Error finding container ea9359a1525df7dedd3d0704fa36125a2831836999184f23e64643dd75e53b0e: Status 404 returned error can't find the container with id ea9359a1525df7dedd3d0704fa36125a2831836999184f23e64643dd75e53b0e Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.297538 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.319406 5108 generic.go:358] "Generic (PLEG): container finished" podID="af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" containerID="f53470f0349cc6b8707af3c2bc15c0525494aead25f907bb884298efb59e0e9b" exitCode=0 Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.319575 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca","Type":"ContainerDied","Data":"f53470f0349cc6b8707af3c2bc15c0525494aead25f907bb884298efb59e0e9b"} Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.325845 5108 generic.go:358] "Generic (PLEG): container finished" podID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerID="f04bb6768ab8660dd418d641eb48dd64d23f0bc1405200098b46dd1e736803c3" exitCode=0 Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.326103 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerDied","Data":"f04bb6768ab8660dd418d641eb48dd64d23f0bc1405200098b46dd1e736803c3"} Feb 02 00:12:13 crc 
kubenswrapper[5108]: I0202 00:12:13.331671 5108 generic.go:358] "Generic (PLEG): container finished" podID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerID="b91c60dbd115b4b7905f65ba4aae50ffb73107e888d42e0249b2d0b2231508b8" exitCode=0 Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.331831 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerDied","Data":"b91c60dbd115b4b7905f65ba4aae50ffb73107e888d42e0249b2d0b2231508b8"} Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.332785 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.333036 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drd6d\" (UniqueName: \"kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.333167 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.340683 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" event={"ID":"917a1c8b-59d5-4acb-8cef-91979326a7d1","Type":"ContainerStarted","Data":"9d5276904f486560300929532b44d4b52eb74aa22d216eeb7926559631800e8b"} Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.352874 5108 generic.go:358] "Generic (PLEG): container finished" podID="ecff25a2-faeb-4efb-9e50-b8981535bbb3" containerID="2926e9efd55ee24f9bd84c1f1c357729c5787a1065057fec02eee0a89b6c7866" exitCode=0 Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.352942 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ecff25a2-faeb-4efb-9e50-b8981535bbb3","Type":"ContainerDied","Data":"2926e9efd55ee24f9bd84c1f1c357729c5787a1065057fec02eee0a89b6c7866"} Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.402716 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"] Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.436044 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-drd6d\" (UniqueName: \"kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.436149 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content\") pod \"redhat-operators-g4h5k\" 
(UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.436254 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.436696 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.437168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.450317 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.450681 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.488605 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-drd6d\" (UniqueName: \"kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d\") pod \"redhat-operators-g4h5k\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.597559 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.610362 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.615045 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"] Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.655931 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"] Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.656059 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.742418 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.743111 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghx28\" (UniqueName: \"kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.743298 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.790089 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.853548 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghx28\" (UniqueName: \"kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.853744 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.853812 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.854431 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.855026 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.895900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ghx28\" (UniqueName: \"kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28\") pod \"redhat-operators-pwwt9\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.900487 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:13 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:13 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:13 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.900559 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.900495 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:12:13 crc kubenswrapper[5108]: W0202 00:12:13.912736 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab8f756d_4492_4dfc_ae46_80bb93dd6d86.slice/crio-91f5baffdf47edb0dcf278405ff6c3e8bfcf6fb2a306cd416c02fa78eef020a8 WatchSource:0}: Error finding container 91f5baffdf47edb0dcf278405ff6c3e8bfcf6fb2a306cd416c02fa78eef020a8: Status 404 returned error can't find the container with id 91f5baffdf47edb0dcf278405ff6c3e8bfcf6fb2a306cd416c02fa78eef020a8 Feb 02 00:12:13 crc kubenswrapper[5108]: I0202 00:12:13.961667 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.077337 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.366539 5108 generic.go:358] "Generic (PLEG): container finished" podID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerID="2e1ed35cecd83ec6e1cd535df757ea287981a6c7aebb8cec80b33fdbbc5c5139" exitCode=0 Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.366683 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerDied","Data":"2e1ed35cecd83ec6e1cd535df757ea287981a6c7aebb8cec80b33fdbbc5c5139"} Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.366724 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerStarted","Data":"ea9359a1525df7dedd3d0704fa36125a2831836999184f23e64643dd75e53b0e"} Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.375010 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" event={"ID":"917a1c8b-59d5-4acb-8cef-91979326a7d1","Type":"ContainerStarted","Data":"4e64f9652f0b240af997a0094d5833499b1a766a26c92b2aac629ab4f3330dfb"} Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.390639 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" event={"ID":"51ba194a-1171-4ed4-a843-0c39ac61d268","Type":"ContainerStarted","Data":"527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe"} Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.390717 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" event={"ID":"51ba194a-1171-4ed4-a843-0c39ac61d268","Type":"ContainerStarted","Data":"1447dcac9c96a7085eca20122133eb4f717b3af0915a27a86280d315ab8e69c0"} Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.391314 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.392957 5108 generic.go:358] "Generic (PLEG): container finished" podID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerID="cf5c6a2438aea906e6d82a2f7c0400d982272ffc4bbb055c232a1e2fffedf93d" exitCode=0 Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.393034 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerDied","Data":"cf5c6a2438aea906e6d82a2f7c0400d982272ffc4bbb055c232a1e2fffedf93d"} Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.393054 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerStarted","Data":"a1c222f8566d6eeedc3932944e3dca34068066d180f7b69bf128f26076481b1b"} Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.415170 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerStarted","Data":"c8b60dd30800821a50c8edf3cedf017fa85abf0860ba13bd51115ac055be3dc4"} Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.415274 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerStarted","Data":"91f5baffdf47edb0dcf278405ff6c3e8bfcf6fb2a306cd416c02fa78eef020a8"} Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.442082 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" podStartSLOduration=110.442062892 podStartE2EDuration="1m50.442062892s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:14.438123185 +0000 UTC m=+133.713620135" watchObservedRunningTime="2026-02-02 00:12:14.442062892 +0000 UTC m=+133.717559822" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.480845 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hnl48" podStartSLOduration=15.480812146 podStartE2EDuration="15.480812146s" podCreationTimestamp="2026-02-02 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:14.472197062 +0000 UTC m=+133.747694022" watchObservedRunningTime="2026-02-02 00:12:14.480812146 +0000 UTC m=+133.756309076" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.637503 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"] Feb 02 00:12:14 crc kubenswrapper[5108]: W0202 00:12:14.651832 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfe89a3e_59b8_4707_863b_ed23bea6f273.slice/crio-1d76080a17da74a3f5f557cd80381d1dd1a2baeca402f2c1f50f111d9dcbf48c WatchSource:0}: Error finding container 1d76080a17da74a3f5f557cd80381d1dd1a2baeca402f2c1f50f111d9dcbf48c: Status 404 returned error can't find the container with id 1d76080a17da74a3f5f557cd80381d1dd1a2baeca402f2c1f50f111d9dcbf48c Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.729969 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.772530 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access\") pod \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.772692 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir\") pod \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\" (UID: \"ecff25a2-faeb-4efb-9e50-b8981535bbb3\") " Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.773294 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ecff25a2-faeb-4efb-9e50-b8981535bbb3" (UID: "ecff25a2-faeb-4efb-9e50-b8981535bbb3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.781479 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ecff25a2-faeb-4efb-9e50-b8981535bbb3" (UID: "ecff25a2-faeb-4efb-9e50-b8981535bbb3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.803562 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.873998 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir\") pod \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.874108 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access\") pod \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\" (UID: \"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca\") " Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.874107 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" (UID: "af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.874550 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.874573 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecff25a2-faeb-4efb-9e50-b8981535bbb3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.874585 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.881605 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" (UID: "af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.904626 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 00:12:14 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 02 00:12:14 crc kubenswrapper[5108]: [+]process-running ok Feb 02 00:12:14 crc kubenswrapper[5108]: healthz check failed Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.904742 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 00:12:14 crc kubenswrapper[5108]: I0202 00:12:14.976384 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.222026 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.227807 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-wbv6f" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.428502 5108 generic.go:358] "Generic (PLEG): container finished" podID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerID="c8b60dd30800821a50c8edf3cedf017fa85abf0860ba13bd51115ac055be3dc4" exitCode=0 Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.428708 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerDied","Data":"c8b60dd30800821a50c8edf3cedf017fa85abf0860ba13bd51115ac055be3dc4"} Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.431023 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ecff25a2-faeb-4efb-9e50-b8981535bbb3","Type":"ContainerDied","Data":"70144879ca1801ad320f413cacebe5723f4e76015c3286fd5327879285141829"} Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.431053 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70144879ca1801ad320f413cacebe5723f4e76015c3286fd5327879285141829" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.431133 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.438334 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca","Type":"ContainerDied","Data":"f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c"} Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.438380 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f41f092b89bf3ce8052d25ff9ab53c4f07a572354f7ce3d35adedaba04defb8c" Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.438464 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.447644 5108 generic.go:358] "Generic (PLEG): container finished" podID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerID="0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159" exitCode=0
Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.447701 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerDied","Data":"0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159"}
Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.448016 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerStarted","Data":"1d76080a17da74a3f5f557cd80381d1dd1a2baeca402f2c1f50f111d9dcbf48c"}
Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.714797 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-znc99"
Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.898846 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:15 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:15 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:15 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:15 crc kubenswrapper[5108]: I0202 00:12:15.898967 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:16 crc kubenswrapper[5108]: I0202 00:12:16.187590 5108 ???:1] "http: TLS handshake error from 192.168.126.11:40198: no serving certificate available for the kubelet"
Feb 02 00:12:16 crc kubenswrapper[5108]: I0202 00:12:16.898546 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:16 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:16 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:16 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:16 crc kubenswrapper[5108]: I0202 00:12:16.899083 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:17 crc kubenswrapper[5108]: I0202 00:12:17.899927 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:17 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:17 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:17 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:17 crc kubenswrapper[5108]: I0202 00:12:17.900007 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:17 crc kubenswrapper[5108]: E0202 00:12:17.989731 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:17 crc kubenswrapper[5108]: E0202 00:12:17.991642 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:17 crc kubenswrapper[5108]: E0202 00:12:17.995661 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:17 crc kubenswrapper[5108]: E0202 00:12:17.995728 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 02 00:12:18 crc kubenswrapper[5108]: I0202 00:12:18.037906 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"
Feb 02 00:12:18 crc kubenswrapper[5108]: I0202 00:12:18.044013 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-q9bzk"
Feb 02 00:12:18 crc kubenswrapper[5108]: I0202 00:12:18.046187 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-z28zc"
Feb 02 00:12:18 crc kubenswrapper[5108]: I0202 00:12:18.900941 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:18 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:18 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:18 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:18 crc kubenswrapper[5108]: I0202 00:12:18.901043 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:19 crc kubenswrapper[5108]: I0202 00:12:19.038011 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:12:19 crc kubenswrapper[5108]: I0202 00:12:19.038097 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:12:19 crc kubenswrapper[5108]: I0202 00:12:19.897293 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:19 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:19 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:19 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:19 crc kubenswrapper[5108]: I0202 00:12:19.897612 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:20 crc kubenswrapper[5108]: I0202 00:12:20.897809 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:20 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:20 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:20 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:20 crc kubenswrapper[5108]: I0202 00:12:20.897893 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:20 crc kubenswrapper[5108]: I0202 00:12:20.987052 5108 patch_prober.go:28] interesting pod/console-64d44f6ddf-9pw49 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Feb 02 00:12:20 crc kubenswrapper[5108]: I0202 00:12:20.987141 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-9pw49" podUID="6d992c02-f6cc-4488-9108-a72c6c2f3dcf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Feb 02 00:12:21 crc kubenswrapper[5108]: I0202 00:12:21.897748 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:21 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:21 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:21 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:21 crc kubenswrapper[5108]: I0202 00:12:21.898520 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:22 crc kubenswrapper[5108]: I0202 00:12:22.896696 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4zf25 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 00:12:22 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 02 00:12:22 crc kubenswrapper[5108]: [+]process-running ok
Feb 02 00:12:22 crc kubenswrapper[5108]: healthz check failed
Feb 02 00:12:22 crc kubenswrapper[5108]: I0202 00:12:22.897308 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4zf25" podUID="031f8213-ba02-4add-9d14-c3a995a10fa9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 00:12:23 crc kubenswrapper[5108]: I0202 00:12:23.450519 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:12:23 crc kubenswrapper[5108]: I0202 00:12:23.450625 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:12:23 crc kubenswrapper[5108]: I0202 00:12:23.897549 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:23 crc kubenswrapper[5108]: I0202 00:12:23.901755 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-4zf25"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.285030 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.291790 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.386216 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.386327 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.386467 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.393146 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.398912 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.435028 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.456747 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.472936 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.487767 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.491333 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f77c18f0-131e-482e-8e09-602b39b0c163-metrics-certs\") pod \"network-metrics-daemon-26ppl\" (UID: \"f77c18f0-131e-482e-8e09-602b39b0c163\") " pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.495255 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26ppl"
Feb 02 00:12:24 crc kubenswrapper[5108]: I0202 00:12:24.770000 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 02 00:12:25 crc kubenswrapper[5108]: I0202 00:12:25.616619 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66k84"
Feb 02 00:12:26 crc kubenswrapper[5108]: I0202 00:12:26.461852 5108 ???:1] "http: TLS handshake error from 192.168.126.11:41080: no serving certificate available for the kubelet"
Feb 02 00:12:27 crc kubenswrapper[5108]: E0202 00:12:27.987796 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:27 crc kubenswrapper[5108]: E0202 00:12:27.990439 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:27 crc kubenswrapper[5108]: E0202 00:12:27.991837 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:27 crc kubenswrapper[5108]: E0202 00:12:27.992259 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 02 00:12:29 crc kubenswrapper[5108]: I0202 00:12:29.038283 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:12:29 crc kubenswrapper[5108]: I0202 00:12:29.038377 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:12:30 crc kubenswrapper[5108]: I0202 00:12:30.993381 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-9pw49"
Feb 02 00:12:31 crc kubenswrapper[5108]: I0202 00:12:31.000571 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-9pw49"
Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.451186 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.451286 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.451340 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-cp5z2"
Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.452075 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.452284 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.452624 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"eeda0735367749aa2e538d9f6b415570b629014d0b7c343ab8f25cae42b998ed"} pod="openshift-console/downloads-747b44746d-cp5z2" containerMessage="Container download-server failed liveness probe, will be restarted"
Feb 02 00:12:33 crc kubenswrapper[5108]: I0202 00:12:33.452701 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" containerID="cri-o://eeda0735367749aa2e538d9f6b415570b629014d0b7c343ab8f25cae42b998ed" gracePeriod=2
Feb 02 00:12:34 crc kubenswrapper[5108]: I0202 00:12:34.601833 5108 generic.go:358] "Generic (PLEG): container finished" podID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerID="eeda0735367749aa2e538d9f6b415570b629014d0b7c343ab8f25cae42b998ed" exitCode=0
Feb 02 00:12:34 crc kubenswrapper[5108]: I0202 00:12:34.602099 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-cp5z2" event={"ID":"07d89198-8b8e-4edc-96b8-05b6df5194f6","Type":"ContainerDied","Data":"eeda0735367749aa2e538d9f6b415570b629014d0b7c343ab8f25cae42b998ed"}
Feb 02 00:12:36 crc kubenswrapper[5108]: I0202 00:12:36.465156 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-mjr86"
Feb 02 00:12:37 crc kubenswrapper[5108]: E0202 00:12:37.989184 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:37 crc kubenswrapper[5108]: E0202 00:12:37.991524 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:37 crc kubenswrapper[5108]: E0202 00:12:37.993335 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:37 crc kubenswrapper[5108]: E0202 00:12:37.993451 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 02 00:12:39 crc kubenswrapper[5108]: I0202 00:12:39.047165 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-f55br"
Feb 02 00:12:40 crc kubenswrapper[5108]: I0202 00:12:40.647597 5108 generic.go:358] "Generic (PLEG): container finished" podID="dcbaa597-5b18-4219-b757-5f10e86a2c1c" containerID="662689ee61fccec648a90a4375a519042cf1cb9c27ef807a261aa5cd1d207f99" exitCode=0
Feb 02 00:12:40 crc kubenswrapper[5108]: I0202 00:12:40.647725 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29499840-njc6g" event={"ID":"dcbaa597-5b18-4219-b757-5f10e86a2c1c","Type":"ContainerDied","Data":"662689ee61fccec648a90a4375a519042cf1cb9c27ef807a261aa5cd1d207f99"}
Feb 02 00:12:43 crc kubenswrapper[5108]: I0202 00:12:43.452403 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:12:43 crc kubenswrapper[5108]: I0202 00:12:43.452787 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.762520 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.764880 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" containerName="pruner"
Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.764916 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" containerName="pruner"
Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.764939 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ecff25a2-faeb-4efb-9e50-b8981535bbb3" containerName="pruner"
Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.764952 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecff25a2-faeb-4efb-9e50-b8981535bbb3" containerName="pruner"
Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.765181 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ecff25a2-faeb-4efb-9e50-b8981535bbb3" containerName="pruner"
Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.765201 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="af6bc5fe-38fb-4fd6-b9a9-57172b79a6ca" containerName="pruner"
Feb 02 00:12:46 crc kubenswrapper[5108]: I0202 00:12:46.971951 5108 ???:1] "http: TLS handshake error from 192.168.126.11:40958: no serving certificate available for the kubelet"
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.674940 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.679583 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.679587 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.683938 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.790453 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.790505 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.892428 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.892481 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.892629 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.914276 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 02 00:12:47 crc kubenswrapper[5108]: E0202 00:12:47.987786 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:47 crc kubenswrapper[5108]: E0202 00:12:47.989841 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:47 crc kubenswrapper[5108]: E0202 00:12:47.991909 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:47 crc kubenswrapper[5108]: E0202 00:12:47.991980 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 02 00:12:47 crc kubenswrapper[5108]: I0202 00:12:47.995329 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 02 00:12:52 crc kubenswrapper[5108]: I0202 00:12:52.376533 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 02 00:12:53 crc kubenswrapper[5108]: I0202 00:12:53.458993 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:12:53 crc kubenswrapper[5108]: I0202 00:12:53.459069 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.198892 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.200676 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.305205 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.305298 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.305362 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.406850 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.406958 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.407017 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.407172 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.407260 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.435535 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access\") pod \"installer-12-crc\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.520865 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Feb 02 00:12:54 crc kubenswrapper[5108]: I0202 00:12:54.927702 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29499840-njc6g"
Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.015424 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l8sn\" (UniqueName: \"kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn\") pod \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") "
Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.017758 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca\") pod \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\" (UID: \"dcbaa597-5b18-4219-b757-5f10e86a2c1c\") "
Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.018678 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca" (OuterVolumeSpecName: "serviceca") pod "dcbaa597-5b18-4219-b757-5f10e86a2c1c" (UID: "dcbaa597-5b18-4219-b757-5f10e86a2c1c"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.023856 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn" (OuterVolumeSpecName: "kube-api-access-2l8sn") pod "dcbaa597-5b18-4219-b757-5f10e86a2c1c" (UID: "dcbaa597-5b18-4219-b757-5f10e86a2c1c"). InnerVolumeSpecName "kube-api-access-2l8sn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.119439 5108 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcbaa597-5b18-4219-b757-5f10e86a2c1c-serviceca\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.119498 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2l8sn\" (UniqueName: \"kubernetes.io/projected/dcbaa597-5b18-4219-b757-5f10e86a2c1c-kube-api-access-2l8sn\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.756731 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29499840-njc6g" event={"ID":"dcbaa597-5b18-4219-b757-5f10e86a2c1c","Type":"ContainerDied","Data":"ab1dda4ca19e44a7d7547556112d79c7a9164fc1db4386291660d7d4020c24e9"}
Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.757197 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab1dda4ca19e44a7d7547556112d79c7a9164fc1db4386291660d7d4020c24e9"
Feb 02 00:12:55 crc kubenswrapper[5108]: I0202 00:12:55.757081 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29499840-njc6g"
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.137271 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-26ppl"]
Feb 02 00:12:56 crc kubenswrapper[5108]: W0202 00:12:56.196373 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf77c18f0_131e_482e_8e09_602b39b0c163.slice/crio-3ac04311d7163033509bae8a3218d2eb5fcc9f8518f664ef5b0e18f864193e32 WatchSource:0}: Error finding container 3ac04311d7163033509bae8a3218d2eb5fcc9f8518f664ef5b0e18f864193e32: Status 404 returned error can't find the container with id 3ac04311d7163033509bae8a3218d2eb5fcc9f8518f664ef5b0e18f864193e32
Feb 02 00:12:56 crc kubenswrapper[5108]: W0202 00:12:56.268023 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-dfd3151b97b0e177c54c648b665c74ea1174a7aeb7ce6fb98c3c71b656998985 WatchSource:0}: Error finding container dfd3151b97b0e177c54c648b665c74ea1174a7aeb7ce6fb98c3c71b656998985: Status 404 returned error can't find the container with id dfd3151b97b0e177c54c648b665c74ea1174a7aeb7ce6fb98c3c71b656998985
Feb 02 00:12:56 crc kubenswrapper[5108]: W0202 00:12:56.275440 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-8f9d7ec5a879486c86949396dc60b009f59c36025832daad1cb00b445f4a7cfb WatchSource:0}: Error finding container 8f9d7ec5a879486c86949396dc60b009f59c36025832daad1cb00b445f4a7cfb: Status 404 returned error can't find the container with id 8f9d7ec5a879486c86949396dc60b009f59c36025832daad1cb00b445f4a7cfb
Feb 02 00:12:56 crc kubenswrapper[5108]: W0202 00:12:56.317899 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-dc6ded92c7d6ae957301b4c12b45c5dcfbfef9d21156d6ec0c1089ca18e41a3d WatchSource:0}: Error finding container dc6ded92c7d6ae957301b4c12b45c5dcfbfef9d21156d6ec0c1089ca18e41a3d: Status 404 returned error can't find the container with id dc6ded92c7d6ae957301b4c12b45c5dcfbfef9d21156d6ec0c1089ca18e41a3d
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.460626 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.495101 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.771693 5108 generic.go:358] "Generic (PLEG): container finished" podID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerID="9a151e0c7d30d225dcdec2ca4f289d179587e1b95d1e6242438eb1c220d1f684" exitCode=0
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.771827 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerDied","Data":"9a151e0c7d30d225dcdec2ca4f289d179587e1b95d1e6242438eb1c220d1f684"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.790192 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerStarted","Data":"577ed71913c5b73811c39461c442deeaa9df5e912b98fd354ac4ff80e8d37c9d"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.807569 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0","Type":"ContainerStarted","Data":"023fb9b38bbdab192bf28e7e40fd7ee26699120e07f3c8523c03dd10c67cacbc"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.809790 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"dfd3151b97b0e177c54c648b665c74ea1174a7aeb7ce6fb98c3c71b656998985"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.811759 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"8f9d7ec5a879486c86949396dc60b009f59c36025832daad1cb00b445f4a7cfb"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.829215 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerStarted","Data":"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.838850 5108 generic.go:358] "Generic (PLEG): container finished" podID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerID="04829b5f755d429edab97e4438b063d5bde6a76582a91c95f9ffc7a26e491127" exitCode=0
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.839246 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerDied","Data":"04829b5f755d429edab97e4438b063d5bde6a76582a91c95f9ffc7a26e491127"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.848824 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerStarted","Data":"5d731cd91d7fa626117bbc5d945723e255f66a42540c3ed2667dd196c604f711"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.860634 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26ppl" event={"ID":"f77c18f0-131e-482e-8e09-602b39b0c163","Type":"ContainerStarted","Data":"3ac04311d7163033509bae8a3218d2eb5fcc9f8518f664ef5b0e18f864193e32"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.865051 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-cp5z2" event={"ID":"07d89198-8b8e-4edc-96b8-05b6df5194f6","Type":"ContainerStarted","Data":"2af24917791832666af442ed7eb6d64dd5c5d3f93ac4c8f51096e3bbf48aaf59"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.868532 5108 generic.go:358] "Generic (PLEG): container finished" podID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerID="e6aef248a8876a5e2dc03274ba4ae95994c688af754968e8c9c65f4a76f03504" exitCode=0
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.868594 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerDied","Data":"e6aef248a8876a5e2dc03274ba4ae95994c688af754968e8c9c65f4a76f03504"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.877996 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerStarted","Data":"f739b14449c93c7de2447b64c031f8bff42355230b104d5359e8914ee83f1bb1"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.924361 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerStarted","Data":"dbd274483dff3718d495129bfcddb0bed6e580e217c4193576318ad2011f04ba"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.933969 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"dc6ded92c7d6ae957301b4c12b45c5dcfbfef9d21156d6ec0c1089ca18e41a3d"}
Feb 02 00:12:56 crc kubenswrapper[5108]: I0202 00:12:56.947427 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"baa9da1f-16dc-411f-8968-783a0e3d1efd","Type":"ContainerStarted","Data":"963c03dd266c5096ab10583ebcc3deeb02b48308e6dbedbd6e48c0e23e5a63d6"}
Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.593038 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-cp5z2"
Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.593598 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.593661 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.983058 5108 generic.go:358] "Generic (PLEG): container finished" podID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerID="dbd274483dff3718d495129bfcddb0bed6e580e217c4193576318ad2011f04ba" exitCode=0
Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.983160 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerDied","Data":"dbd274483dff3718d495129bfcddb0bed6e580e217c4193576318ad2011f04ba"}
Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.985349 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"623ff5d876fd59264a0c09ab3b74d07e5e3e1e4ad9feb42b39e38f0278a89d40"}
Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.985502 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 02 00:12:57 crc kubenswrapper[5108]: E0202 00:12:57.988606 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4 is running failed: container process not found" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:57 crc kubenswrapper[5108]: E0202 00:12:57.989532 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4 is running failed: container process not found" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:57 crc kubenswrapper[5108]: E0202 00:12:57.989814 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4 is running failed: container process not found" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 02 00:12:57 crc kubenswrapper[5108]: E0202 00:12:57.989854 5108 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.991174 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"baa9da1f-16dc-411f-8968-783a0e3d1efd","Type":"ContainerStarted","Data":"491b9dc33be340ea8ece574e78c47522d583627c53b52c926c6593004894e871"}
Feb 02 00:12:57 crc kubenswrapper[5108]: I0202 00:12:57.996275 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerStarted","Data":"7027daeb8294c638005dbc109971ebb173c299ff05d37653d85c7855028e63bd"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:57.999970 5108 generic.go:358] "Generic (PLEG): container finished" podID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerID="577ed71913c5b73811c39461c442deeaa9df5e912b98fd354ac4ff80e8d37c9d" exitCode=0
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.000098 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerDied","Data":"577ed71913c5b73811c39461c442deeaa9df5e912b98fd354ac4ff80e8d37c9d"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.001898 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0","Type":"ContainerStarted","Data":"4625d2b7c738f7c93691f6690d5bf737225154026be3eb28dcad721028323978"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.004714 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"215e28801ee330962c407d77f1324c3625654baa1f13e0944ef2939325bbcbfe"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.011918 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"78c922aa5d47d22d17e1e520318325fce6565e814630f7dd12b068d3f91b5458"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.016994 5108 generic.go:358] "Generic (PLEG): container finished" podID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerID="bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a" exitCode=0
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.017095 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerDied","Data":"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.022674 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ng2x6_ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b/kube-multus-additional-cni-plugins/0.log"
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.022752 5108 generic.go:358] "Generic (PLEG): container finished" podID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4" exitCode=137
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.022873 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" event={"ID":"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b","Type":"ContainerDied","Data":"fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.030009 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wzh6n" podStartSLOduration=4.617476174 podStartE2EDuration="46.029987233s" podCreationTimestamp="2026-02-02 00:12:12 +0000 UTC" firstStartedPulling="2026-02-02 00:12:14.367824532 +0000 UTC m=+133.643321462" lastFinishedPulling="2026-02-02 00:12:55.780335591 +0000 UTC m=+175.055832521" observedRunningTime="2026-02-02 00:12:58.029341594 +0000 UTC m=+177.304838544" watchObservedRunningTime="2026-02-02 00:12:58.029987233 +0000 UTC m=+177.305484153"
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.030088 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerStarted","Data":"f69389c32201712636c553d4608b07ef227f9bb8555914fc6850f406b4363fe6"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.037852 5108 generic.go:358] "Generic (PLEG): container finished" podID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerID="5d731cd91d7fa626117bbc5d945723e255f66a42540c3ed2667dd196c604f711" exitCode=0
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.037949 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerDied","Data":"5d731cd91d7fa626117bbc5d945723e255f66a42540c3ed2667dd196c604f711"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.042340 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26ppl" event={"ID":"f77c18f0-131e-482e-8e09-602b39b0c163","Type":"ContainerStarted","Data":"db09f2b79f118c53d87217f9d083d12994294c3db45efe4ee167dce6c7a0257f"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.047451 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=6.047431787 podStartE2EDuration="6.047431787s" podCreationTimestamp="2026-02-02 00:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:58.043445908 +0000 UTC m=+177.318942858" watchObservedRunningTime="2026-02-02 00:12:58.047431787 +0000 UTC m=+177.322928717"
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.054008 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerStarted","Data":"44c29c35f3f042606025783238fe84449fa274df709647a8bb2c6f5b25f6ea6a"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.060355 5108 generic.go:358] "Generic (PLEG): container finished" podID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerID="f739b14449c93c7de2447b64c031f8bff42355230b104d5359e8914ee83f1bb1" exitCode=0
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.060447 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerDied","Data":"f739b14449c93c7de2447b64c031f8bff42355230b104d5359e8914ee83f1bb1"}
Feb 02 00:12:58 crc kubenswrapper[5108]: I0202 00:12:58.062516 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=12.062498167 podStartE2EDuration="12.062498167s" podCreationTimestamp="2026-02-02 00:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:12:58.058171839 +0000 UTC m=+177.333668789" watchObservedRunningTime="2026-02-02 00:12:58.062498167 +0000 UTC m=+177.337995087"
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.156888 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-52cvp" podStartSLOduration=6.734009918 podStartE2EDuration="50.156865859s" podCreationTimestamp="2026-02-02 00:12:09 +0000 UTC" firstStartedPulling="2026-02-02 00:12:12.276819265 +0000 UTC m=+131.552316195" lastFinishedPulling="2026-02-02 00:12:55.699675216 +0000 UTC m=+174.975172136" observedRunningTime="2026-02-02 00:12:59.153806235 +0000 UTC m=+178.429303195" watchObservedRunningTime="2026-02-02 00:12:59.156865859 +0000 UTC m=+178.432362799"
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.523557 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ng2x6_ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b/kube-multus-additional-cni-plugins/0.log"
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.523643 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6"
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596011 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready\") pod \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") "
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596204 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir\") pod \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") "
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596257 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xl46\" (UniqueName: \"kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46\") pod \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") "
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596316 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist\") pod \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\" (UID: \"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b\") "
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596391 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" (UID: "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.596647 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready" (OuterVolumeSpecName: "ready") pod "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" (UID: "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.597266 5108 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-ready\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.597295 5108 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.597366 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" (UID: "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.611372 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46" (OuterVolumeSpecName: "kube-api-access-2xl46") pod "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" (UID: "ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b"). InnerVolumeSpecName "kube-api-access-2xl46". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.699472 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2xl46\" (UniqueName: \"kubernetes.io/projected/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-kube-api-access-2xl46\") on node \"crc\" DevicePath \"\""
Feb 02 00:12:59 crc kubenswrapper[5108]: I0202 00:12:59.699552 5108 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.082094 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerStarted","Data":"a863dcfbfd0957bb6d04ba9b952871d33c859aed1552b5491529a2c3d101a795"}
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.087160 5108 generic.go:358] "Generic (PLEG): container finished" podID="fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" containerID="4625d2b7c738f7c93691f6690d5bf737225154026be3eb28dcad721028323978" exitCode=0
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.087415 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0","Type":"ContainerDied","Data":"4625d2b7c738f7c93691f6690d5bf737225154026be3eb28dcad721028323978"}
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.090129 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ng2x6_ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b/kube-multus-additional-cni-plugins/0.log"
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.090406 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6" event={"ID":"ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b","Type":"ContainerDied","Data":"b7ccd63409a2599caa2a1d6a430c1e67af5f138dd3ea1e54d57df99b1d6cd73a"}
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.090574 5108 scope.go:117] "RemoveContainer" containerID="fafb2432d8f7f07422a91f753012653b03a0d5ff2d26b57ed9bfae68ee8e15c4"
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.090466 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ng2x6"
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.094573 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26ppl" event={"ID":"f77c18f0-131e-482e-8e09-602b39b0c163","Type":"ContainerStarted","Data":"f5df6cd7478c7ba7f695fd1ad9afb726bfa5ba738bd0890317ffd54325afc4f1"}
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.150373 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ng2x6"]
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.154365 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ng2x6"]
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.204986 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.205163 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.267513 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pv288" podStartSLOduration=6.700227711 podStartE2EDuration="48.267490742s" podCreationTimestamp="2026-02-02 00:12:12 +0000 UTC" firstStartedPulling="2026-02-02 00:12:14.393610333 +0000 UTC m=+133.669107263" lastFinishedPulling="2026-02-02 00:12:55.960873364 +0000 UTC m=+175.236370294" observedRunningTime="2026-02-02 00:13:00.265577841 +0000 UTC m=+179.541074821" watchObservedRunningTime="2026-02-02 00:13:00.267490742 +0000 UTC m=+179.542987692"
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.864699 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-52cvp"
Feb 02 00:13:00 crc kubenswrapper[5108]: I0202 00:13:00.865118 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-52cvp"
Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.106790 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerStarted","Data":"3f0b7cceb8942beae974160beea654ece1ffcbdf5f51cb46e2bcafac40dd76f7"}
Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.109855 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerStarted","Data":"491616bba6f580cdfcad1db207711f26c90dd6c13b2aeba8831681ffd74d9b1d"}
Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.133620 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jgmw6"
Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.134016 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-jgmw6"
Feb 02 00:13:01 crc kubenswrapper[5108]: I0202
00:13:01.136495 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jgmw6" podStartSLOduration=8.688598725 podStartE2EDuration="51.136476741s" podCreationTimestamp="2026-02-02 00:12:10 +0000 UTC" firstStartedPulling="2026-02-02 00:12:13.332449475 +0000 UTC m=+132.607946395" lastFinishedPulling="2026-02-02 00:12:55.780327481 +0000 UTC m=+175.055824411" observedRunningTime="2026-02-02 00:13:01.133708386 +0000 UTC m=+180.409205326" watchObservedRunningTime="2026-02-02 00:13:01.136476741 +0000 UTC m=+180.411973671" Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.166624 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9ss2j" podStartSLOduration=7.478698147 podStartE2EDuration="51.16659413s" podCreationTimestamp="2026-02-02 00:12:10 +0000 UTC" firstStartedPulling="2026-02-02 00:12:12.276762794 +0000 UTC m=+131.552259724" lastFinishedPulling="2026-02-02 00:12:55.964658767 +0000 UTC m=+175.240155707" observedRunningTime="2026-02-02 00:13:01.165618294 +0000 UTC m=+180.441115224" watchObservedRunningTime="2026-02-02 00:13:01.16659413 +0000 UTC m=+180.442091050" Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.190848 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-26ppl" podStartSLOduration=157.190806069 podStartE2EDuration="2m37.190806069s" podCreationTimestamp="2026-02-02 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:13:01.186110311 +0000 UTC m=+180.461607251" watchObservedRunningTime="2026-02-02 00:13:01.190806069 +0000 UTC m=+180.466302999" Feb 02 00:13:01 crc kubenswrapper[5108]: I0202 00:13:01.567824 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" path="/var/lib/kubelet/pods/ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b/volumes" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.053281 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-52cvp" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="registry-server" probeResult="failure" output=< Feb 02 00:13:02 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Feb 02 00:13:02 crc kubenswrapper[5108]: > Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.122442 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerStarted","Data":"0df55c9f0ebaec40aacdfbba7ebb6e0073cb9d22b3cdc2120d6cd95d09159f3c"} Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.147943 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g4h5k" podStartSLOduration=7.791521844 podStartE2EDuration="49.147919246s" podCreationTimestamp="2026-02-02 00:12:13 +0000 UTC" firstStartedPulling="2026-02-02 00:12:14.42765397 +0000 UTC m=+133.703150900" lastFinishedPulling="2026-02-02 00:12:55.784051372 +0000 UTC m=+175.059548302" observedRunningTime="2026-02-02 00:13:02.144198714 +0000 UTC m=+181.419695654" watchObservedRunningTime="2026-02-02 00:13:02.147919246 +0000 UTC m=+181.423416176" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.317308 5108 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-jgmw6" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="registry-server" probeResult="failure" output=< Feb 02 00:13:02 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Feb 02 00:13:02 crc kubenswrapper[5108]: > Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.503462 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.647668 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access\") pod \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.647926 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir\") pod \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\" (UID: \"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0\") " Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.648092 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" (UID: "fa0c4e3b-102b-4208-9aea-f2c48cf52ac0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.648332 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.655547 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" (UID: "fa0c4e3b-102b-4208-9aea-f2c48cf52ac0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.672497 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.672600 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.735424 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.749402 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa0c4e3b-102b-4208-9aea-f2c48cf52ac0-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.958270 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:02 crc kubenswrapper[5108]: I0202 00:13:02.958359 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.012556 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.131100 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerStarted","Data":"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc"} Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.133255 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"fa0c4e3b-102b-4208-9aea-f2c48cf52ac0","Type":"ContainerDied","Data":"023fb9b38bbdab192bf28e7e40fd7ee26699120e07f3c8523c03dd10c67cacbc"} Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.133431 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="023fb9b38bbdab192bf28e7e40fd7ee26699120e07f3c8523c03dd10c67cacbc" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.133357 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.161246 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pwwt9" podStartSLOduration=9.616507328 podStartE2EDuration="50.161204231s" podCreationTimestamp="2026-02-02 00:12:13 +0000 UTC" firstStartedPulling="2026-02-02 00:12:15.449634322 +0000 UTC m=+134.725131252" lastFinishedPulling="2026-02-02 00:12:55.994331225 +0000 UTC m=+175.269828155" observedRunningTime="2026-02-02 00:13:03.157182492 +0000 UTC m=+182.432679432" watchObservedRunningTime="2026-02-02 00:13:03.161204231 +0000 UTC m=+182.436701171" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.181187 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8l8nm" podStartSLOduration=10.551330186 podStartE2EDuration="53.181166984s" podCreationTimestamp="2026-02-02 00:12:10 +0000 UTC" firstStartedPulling="2026-02-02 00:12:13.330332427 +0000 UTC m=+132.605829357" lastFinishedPulling="2026-02-02 00:12:55.960169215 +0000 UTC m=+175.235666155" observedRunningTime="2026-02-02 00:13:03.178056149 +0000 UTC m=+182.453553099" watchObservedRunningTime="2026-02-02 00:13:03.181166984 +0000 UTC m=+182.456663914" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.191578 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.193000 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.450323 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-cp5z2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.450429 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-cp5z2" podUID="07d89198-8b8e-4edc-96b8-05b6df5194f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.612200 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:13:03 crc kubenswrapper[5108]: I0202 00:13:03.612730 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:13:04 crc kubenswrapper[5108]: I0202 00:13:04.077907 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:04 crc kubenswrapper[5108]: I0202 00:13:04.077961 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:04 crc kubenswrapper[5108]: I0202 00:13:04.674194 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4h5k" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="registry-server" probeResult="failure" output=< Feb 02 00:13:04 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s 
Feb 02 00:13:04 crc kubenswrapper[5108]: > Feb 02 00:13:05 crc kubenswrapper[5108]: I0202 00:13:05.123396 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pwwt9" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="registry-server" probeResult="failure" output=< Feb 02 00:13:05 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Feb 02 00:13:05 crc kubenswrapper[5108]: > Feb 02 00:13:07 crc kubenswrapper[5108]: I0202 00:13:07.204368 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"] Feb 02 00:13:07 crc kubenswrapper[5108]: I0202 00:13:07.204989 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pv288" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="registry-server" containerID="cri-o://f69389c32201712636c553d4608b07ef227f9bb8555914fc6850f406b4363fe6" gracePeriod=2 Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.183485 5108 generic.go:358] "Generic (PLEG): container finished" podID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerID="f69389c32201712636c553d4608b07ef227f9bb8555914fc6850f406b4363fe6" exitCode=0 Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.183630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerDied","Data":"f69389c32201712636c553d4608b07ef227f9bb8555914fc6850f406b4363fe6"} Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.211396 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-cp5z2" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.395065 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.447327 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.600041 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.600105 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.647652 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.807306 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.807705 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.862212 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.883414 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.976691 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmr8d\" (UniqueName: \"kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d\") pod \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.976838 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities\") pod \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.976970 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content\") pod \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\" (UID: \"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa\") " Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.980011 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities" (OuterVolumeSpecName: "utilities") pod "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" (UID: "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.992486 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d" (OuterVolumeSpecName: "kube-api-access-rmr8d") pod "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" (UID: "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa"). InnerVolumeSpecName "kube-api-access-rmr8d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:10 crc kubenswrapper[5108]: I0202 00:13:10.992886 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" (UID: "2c75ea2b-3f96-47c6-a70b-ef520d82a3fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.078425 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.078478 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.078502 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmr8d\" (UniqueName: \"kubernetes.io/projected/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa-kube-api-access-rmr8d\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.198960 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pv288" event={"ID":"2c75ea2b-3f96-47c6-a70b-ef520d82a3fa","Type":"ContainerDied","Data":"a1c222f8566d6eeedc3932944e3dca34068066d180f7b69bf128f26076481b1b"} Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.199053 5108 scope.go:117] "RemoveContainer" containerID="f69389c32201712636c553d4608b07ef227f9bb8555914fc6850f406b4363fe6" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.199330 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pv288" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.212101 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.265319 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.265848 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"] Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.269034 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.272488 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pv288"] Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.281838 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.338958 5108 scope.go:117] "RemoveContainer" containerID="04829b5f755d429edab97e4438b063d5bde6a76582a91c95f9ffc7a26e491127" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.362393 5108 scope.go:117] "RemoveContainer" containerID="cf5c6a2438aea906e6d82a2f7c0400d982272ffc4bbb055c232a1e2fffedf93d" Feb 02 00:13:11 crc kubenswrapper[5108]: I0202 00:13:11.569677 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" path="/var/lib/kubelet/pods/2c75ea2b-3f96-47c6-a70b-ef520d82a3fa/volumes" Feb 02 00:13:12 crc kubenswrapper[5108]: I0202 00:13:12.005161 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:13:13 crc kubenswrapper[5108]: I0202 00:13:13.008420 5108 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:13:13 crc kubenswrapper[5108]: I0202 00:13:13.212811 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9ss2j" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="registry-server" containerID="cri-o://a863dcfbfd0957bb6d04ba9b952871d33c859aed1552b5491529a2c3d101a795" gracePeriod=2 Feb 02 00:13:13 crc kubenswrapper[5108]: I0202 00:13:13.212917 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jgmw6" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="registry-server" containerID="cri-o://491616bba6f580cdfcad1db207711f26c90dd6c13b2aeba8831681ffd74d9b1d" gracePeriod=2 Feb 02 00:13:13 crc kubenswrapper[5108]: I0202 00:13:13.723190 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:13:13 crc kubenswrapper[5108]: I0202 00:13:13.774492 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:13:14 crc kubenswrapper[5108]: I0202 00:13:14.150105 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:14 crc kubenswrapper[5108]: I0202 00:13:14.263335 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:15 crc kubenswrapper[5108]: I0202 00:13:15.232527 5108 generic.go:358] "Generic (PLEG): container finished" podID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerID="a863dcfbfd0957bb6d04ba9b952871d33c859aed1552b5491529a2c3d101a795" exitCode=0 Feb 02 00:13:15 crc kubenswrapper[5108]: I0202 00:13:15.232639 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerDied","Data":"a863dcfbfd0957bb6d04ba9b952871d33c859aed1552b5491529a2c3d101a795"} Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.225160 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.244050 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9ss2j" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.244080 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ss2j" event={"ID":"fa0ae7f1-2fcb-48e2-9553-1144cc082b96","Type":"ContainerDied","Data":"bf1f4e8893cf7d38c33c0c17e67ab9bd9445bacbc6cedb29875eaf455b2ef485"} Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.244144 5108 scope.go:117] "RemoveContainer" containerID="a863dcfbfd0957bb6d04ba9b952871d33c859aed1552b5491529a2c3d101a795" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.250044 5108 generic.go:358] "Generic (PLEG): container finished" podID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerID="491616bba6f580cdfcad1db207711f26c90dd6c13b2aeba8831681ffd74d9b1d" exitCode=0 Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.250093 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerDied","Data":"491616bba6f580cdfcad1db207711f26c90dd6c13b2aeba8831681ffd74d9b1d"} Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.311934 5108 scope.go:117] "RemoveContainer" containerID="dbd274483dff3718d495129bfcddb0bed6e580e217c4193576318ad2011f04ba" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.315780 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities\") pod \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.315922 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmjzg\" (UniqueName: \"kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg\") pod \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.316114 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content\") pod \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\" (UID: \"fa0ae7f1-2fcb-48e2-9553-1144cc082b96\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.323173 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg" (OuterVolumeSpecName: "kube-api-access-dmjzg") pod "fa0ae7f1-2fcb-48e2-9553-1144cc082b96" (UID: "fa0ae7f1-2fcb-48e2-9553-1144cc082b96"). InnerVolumeSpecName "kube-api-access-dmjzg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.329178 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities" (OuterVolumeSpecName: "utilities") pod "fa0ae7f1-2fcb-48e2-9553-1144cc082b96" (UID: "fa0ae7f1-2fcb-48e2-9553-1144cc082b96"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.332147 5108 scope.go:117] "RemoveContainer" containerID="dc6f982b2d56c1abb172d98e66aa0c15b24571bc47876df35d5985b98e039d3c" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.363083 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa0ae7f1-2fcb-48e2-9553-1144cc082b96" (UID: "fa0ae7f1-2fcb-48e2-9553-1144cc082b96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.417403 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.417438 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmjzg\" (UniqueName: \"kubernetes.io/projected/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-kube-api-access-dmjzg\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.417447 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0ae7f1-2fcb-48e2-9553-1144cc082b96-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.440271 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.518381 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content\") pod \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.518586 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities\") pod \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.518754 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwm9f\" (UniqueName: \"kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f\") pod \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\" (UID: \"41859985-fc1d-4d4e-bbe8-b0a99955ac0a\") " Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.519625 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities" (OuterVolumeSpecName: "utilities") pod "41859985-fc1d-4d4e-bbe8-b0a99955ac0a" (UID: "41859985-fc1d-4d4e-bbe8-b0a99955ac0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.523700 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f" (OuterVolumeSpecName: "kube-api-access-dwm9f") pod "41859985-fc1d-4d4e-bbe8-b0a99955ac0a" (UID: "41859985-fc1d-4d4e-bbe8-b0a99955ac0a"). 
InnerVolumeSpecName "kube-api-access-dwm9f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.568059 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41859985-fc1d-4d4e-bbe8-b0a99955ac0a" (UID: "41859985-fc1d-4d4e-bbe8-b0a99955ac0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.577461 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.590753 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9ss2j"] Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.621935 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.621978 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dwm9f\" (UniqueName: \"kubernetes.io/projected/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-kube-api-access-dwm9f\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:16 crc kubenswrapper[5108]: I0202 00:13:16.622019 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41859985-fc1d-4d4e-bbe8-b0a99955ac0a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.289878 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgmw6" event={"ID":"41859985-fc1d-4d4e-bbe8-b0a99955ac0a","Type":"ContainerDied","Data":"6f0c7fb95227a7df0062f6ca54786e7bc1b0d3aad99b375a28cf44d515d2f1be"} Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.289970 5108 scope.go:117] "RemoveContainer" containerID="491616bba6f580cdfcad1db207711f26c90dd6c13b2aeba8831681ffd74d9b1d" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.290276 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgmw6" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.319046 5108 scope.go:117] "RemoveContainer" containerID="577ed71913c5b73811c39461c442deeaa9df5e912b98fd354ac4ff80e8d37c9d" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.354806 5108 scope.go:117] "RemoveContainer" containerID="b91c60dbd115b4b7905f65ba4aae50ffb73107e888d42e0249b2d0b2231508b8" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.358536 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.362315 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jgmw6"] Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.567090 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" path="/var/lib/kubelet/pods/41859985-fc1d-4d4e-bbe8-b0a99955ac0a/volumes" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.569044 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" path="/var/lib/kubelet/pods/fa0ae7f1-2fcb-48e2-9553-1144cc082b96/volumes" Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.806475 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"] Feb 02 00:13:17 crc kubenswrapper[5108]: I0202 00:13:17.806771 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pwwt9" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="registry-server" containerID="cri-o://4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc" gracePeriod=2 Feb 02 00:13:18 crc kubenswrapper[5108]: I0202 00:13:18.968055 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.061947 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghx28\" (UniqueName: \"kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28\") pod \"dfe89a3e-59b8-4707-863b-ed23bea6f273\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.062018 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities\") pod \"dfe89a3e-59b8-4707-863b-ed23bea6f273\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.062077 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content\") pod \"dfe89a3e-59b8-4707-863b-ed23bea6f273\" (UID: \"dfe89a3e-59b8-4707-863b-ed23bea6f273\") " Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.064456 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities" (OuterVolumeSpecName: "utilities") pod "dfe89a3e-59b8-4707-863b-ed23bea6f273" (UID: "dfe89a3e-59b8-4707-863b-ed23bea6f273"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.076433 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28" (OuterVolumeSpecName: "kube-api-access-ghx28") pod "dfe89a3e-59b8-4707-863b-ed23bea6f273" (UID: "dfe89a3e-59b8-4707-863b-ed23bea6f273"). InnerVolumeSpecName "kube-api-access-ghx28". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.164420 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ghx28\" (UniqueName: \"kubernetes.io/projected/dfe89a3e-59b8-4707-863b-ed23bea6f273-kube-api-access-ghx28\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.164499 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.217028 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfe89a3e-59b8-4707-863b-ed23bea6f273" (UID: "dfe89a3e-59b8-4707-863b-ed23bea6f273"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.267046 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe89a3e-59b8-4707-863b-ed23bea6f273-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.316049 5108 generic.go:358] "Generic (PLEG): container finished" podID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerID="4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc" exitCode=0 Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.316170 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerDied","Data":"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc"} Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.316268 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwwt9" event={"ID":"dfe89a3e-59b8-4707-863b-ed23bea6f273","Type":"ContainerDied","Data":"1d76080a17da74a3f5f557cd80381d1dd1a2baeca402f2c1f50f111d9dcbf48c"} Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.316270 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwwt9" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.316297 5108 scope.go:117] "RemoveContainer" containerID="4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.369371 5108 scope.go:117] "RemoveContainer" containerID="bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.382767 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"] Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.386906 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pwwt9"] Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.417394 5108 scope.go:117] "RemoveContainer" containerID="0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.448007 5108 scope.go:117] "RemoveContainer" containerID="4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc" Feb 02 00:13:19 crc kubenswrapper[5108]: E0202 00:13:19.450721 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc\": container with ID starting with 4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc not found: ID does not exist" containerID="4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.450876 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc"} err="failed to get container status \"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc\": rpc error: code = NotFound desc = could not find container \"4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc\": container with ID starting with 4b338b3f4df78d00f252f9447b77d288361781f4e642c0ade962c7bf4a7832bc not found: ID does not exist" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.450973 5108 scope.go:117] "RemoveContainer" containerID="bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a" Feb 02 00:13:19 crc kubenswrapper[5108]: E0202 00:13:19.451910 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a\": container with ID starting with bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a not found: ID does not exist" containerID="bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.452019 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a"} err="failed to get container status \"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a\": rpc error: code = NotFound desc = could not find container \"bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a\": container with ID starting with bba0560574f73eec1d60de449632b2dc8d3a3440a2b0153fef5cbe7ef666f65a not found: ID does not exist" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.452102 5108 scope.go:117] "RemoveContainer" 
containerID="0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159" Feb 02 00:13:19 crc kubenswrapper[5108]: E0202 00:13:19.452934 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159\": container with ID starting with 0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159 not found: ID does not exist" containerID="0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.452980 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159"} err="failed to get container status \"0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159\": rpc error: code = NotFound desc = could not find container \"0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159\": container with ID starting with 0b459a10fadacde706828eec18857607c3bf0d9dbe99f37a40a6ceaa6747e159 not found: ID does not exist" Feb 02 00:13:19 crc kubenswrapper[5108]: I0202 00:13:19.574313 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" path="/var/lib/kubelet/pods/dfe89a3e-59b8-4707-863b-ed23bea6f273/volumes" Feb 02 00:13:27 crc kubenswrapper[5108]: I0202 00:13:27.956399 5108 ???:1] "http: TLS handshake error from 192.168.126.11:48570: no serving certificate available for the kubelet" Feb 02 00:13:30 crc kubenswrapper[5108]: I0202 00:13:30.311942 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 02 00:13:34 crc kubenswrapper[5108]: I0202 00:13:34.772977 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-4lq2m"] Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.526051 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.527895 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528072 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528202 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528357 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528494 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528609 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528722 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528849 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.528972 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dcbaa597-5b18-4219-b757-5f10e86a2c1c" containerName="image-pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529084 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbaa597-5b18-4219-b757-5f10e86a2c1c" containerName="image-pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529200 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" containerName="pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529359 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" containerName="pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529481 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529602 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529785 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.529908 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530027 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530157 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530389 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530522 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530643 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.530874 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531027 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531144 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" 
containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531302 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531435 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="extract-utilities" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531567 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531677 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.531788 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532003 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="extract-content" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532342 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="fa0c4e3b-102b-4208-9aea-f2c48cf52ac0" containerName="pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532497 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="fa0ae7f1-2fcb-48e2-9553-1144cc082b96" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532621 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ef4d28dd-0b7f-4abe-8cf3-b8fbf3ee632b" containerName="kube-multus-additional-cni-plugins" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532739 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="41859985-fc1d-4d4e-bbe8-b0a99955ac0a" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532856 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="dcbaa597-5b18-4219-b757-5f10e86a2c1c" containerName="image-pruner" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.532973 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="dfe89a3e-59b8-4707-863b-ed23bea6f273" containerName="registry-server" Feb 02 00:13:35 crc kubenswrapper[5108]: I0202 00:13:35.533097 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c75ea2b-3f96-47c6-a70b-ef520d82a3fa" containerName="registry-server" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.086207 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.098946 5108 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.099182 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.099986 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022" gracePeriod=15 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.100264 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df" gracePeriod=15 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.100045 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb" gracePeriod=15 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.100108 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb" gracePeriod=15 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.100034 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448" gracePeriod=15 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.103801 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.103958 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.104308 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.104554 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.104753 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.104960 5108 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105079 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105386 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105597 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105712 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105863 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.105983 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106097 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106208 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106365 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106474 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106593 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.106698 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.108661 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.108807 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.108948 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.109076 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.109196 5108 
memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.109366 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.109575 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.109696 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.110001 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.110141 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.110482 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.130425 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.130504 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.130581 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.130615 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.130844 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:36 crc 
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232346 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232441 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232541 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232546 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232586 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232593 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232618 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232635 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232647 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232675 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232726 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232745 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232816 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.232815 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.236332 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: E0202 00:13:36.236896 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.237375 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.333686 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.333868 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.334394 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.334680 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.334727 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.334761 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.335174 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.335303 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.335317 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
\"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:36 crc kubenswrapper[5108]: E0202 00:13:36.381407 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189045a22bab678b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,LastTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.427312 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.428731 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.429414 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df" exitCode=0 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.429449 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022" exitCode=0 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.429459 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448" exitCode=0 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.429467 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb" exitCode=2 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.429539 5108 scope.go:117] "RemoveContainer" containerID="c52275b7d2e3de9999219dc743e02704d4eda3c41a5a0f02432e57072f4294b0" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.431663 5108 generic.go:358] "Generic (PLEG): container finished" podID="baa9da1f-16dc-411f-8968-783a0e3d1efd" containerID="491b9dc33be340ea8ece574e78c47522d583627c53b52c926c6593004894e871" exitCode=0 Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.431828 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" 
event={"ID":"baa9da1f-16dc-411f-8968-783a0e3d1efd","Type":"ContainerDied","Data":"491b9dc33be340ea8ece574e78c47522d583627c53b52c926c6593004894e871"} Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.433019 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.433371 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"7ebfc3060fac9640de69ae937ab85bafacacb465f0f768c08164103023429070"} Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.433554 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.665146 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 02 00:13:36 crc kubenswrapper[5108]: I0202 00:13:36.665245 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 02 00:13:36 crc kubenswrapper[5108]: E0202 00:13:36.674669 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189045a22bab678b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,LastTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.441499 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.445304 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454"} Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.445691 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.446183 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.446552 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:37 crc kubenswrapper[5108]: E0202 00:13:37.447218 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.677963 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.678624 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.753825 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock\") pod \"baa9da1f-16dc-411f-8968-783a0e3d1efd\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.753972 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir\") pod \"baa9da1f-16dc-411f-8968-783a0e3d1efd\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.753992 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock" (OuterVolumeSpecName: "var-lock") pod "baa9da1f-16dc-411f-8968-783a0e3d1efd" (UID: "baa9da1f-16dc-411f-8968-783a0e3d1efd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.754072 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access\") pod \"baa9da1f-16dc-411f-8968-783a0e3d1efd\" (UID: \"baa9da1f-16dc-411f-8968-783a0e3d1efd\") " Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.754175 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "baa9da1f-16dc-411f-8968-783a0e3d1efd" (UID: "baa9da1f-16dc-411f-8968-783a0e3d1efd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.754854 5108 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.754892 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa9da1f-16dc-411f-8968-783a0e3d1efd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.760556 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "baa9da1f-16dc-411f-8968-783a0e3d1efd" (UID: "baa9da1f-16dc-411f-8968-783a0e3d1efd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:13:37 crc kubenswrapper[5108]: I0202 00:13:37.856445 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa9da1f-16dc-411f-8968-783a0e3d1efd-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:38 crc kubenswrapper[5108]: I0202 00:13:38.454216 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"baa9da1f-16dc-411f-8968-783a0e3d1efd","Type":"ContainerDied","Data":"963c03dd266c5096ab10583ebcc3deeb02b48308e6dbedbd6e48c0e23e5a63d6"} Feb 02 00:13:38 crc kubenswrapper[5108]: I0202 00:13:38.454911 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963c03dd266c5096ab10583ebcc3deeb02b48308e6dbedbd6e48c0e23e5a63d6" Feb 02 00:13:38 crc kubenswrapper[5108]: I0202 00:13:38.454385 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:38 crc kubenswrapper[5108]: I0202 00:13:38.454370 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 02 00:13:38 crc kubenswrapper[5108]: E0202 00:13:38.456076 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:13:38 crc kubenswrapper[5108]: I0202 00:13:38.472407 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.074202 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.075367 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.075956 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.076243 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177490 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177539 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177693 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177712 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177766 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177834 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177841 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.177863 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.178152 5108 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.178176 5108 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.178188 5108 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.178472 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.179487 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.279542 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.279572 5108 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.463556 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.464155 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb" exitCode=0 Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.464338 5108 scope.go:117] "RemoveContainer" containerID="2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.464457 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.482095 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.482605 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.490078 5108 scope.go:117] "RemoveContainer" containerID="d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.506924 5108 scope.go:117] "RemoveContainer" containerID="ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.521014 5108 scope.go:117] "RemoveContainer" containerID="3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.536378 5108 scope.go:117] "RemoveContainer" containerID="f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.555830 5108 scope.go:117] "RemoveContainer" containerID="f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.566774 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.623395 5108 scope.go:117] "RemoveContainer" containerID="2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df" Feb 02 00:13:39 crc 
Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.624356 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df\": container with ID starting with 2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df not found: ID does not exist" containerID="2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.624401 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df"} err="failed to get container status \"2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df\": rpc error: code = NotFound desc = could not find container \"2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df\": container with ID starting with 2059372f72d2c806796d55e8f8b2578389d4c3e0ad5759b0971d40a59eab72df not found: ID does not exist"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.624430 5108 scope.go:117] "RemoveContainer" containerID="d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022"
Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.624833 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\": container with ID starting with d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022 not found: ID does not exist" containerID="d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.624893 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022"} err="failed to get container status \"d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\": rpc error: code = NotFound desc = could not find container \"d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022\": container with ID starting with d5f03eccfcbfd5dcc0d8f5377d9a990417bfdf07f50ffc90e03da466e93bd022 not found: ID does not exist"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.624932 5108 scope.go:117] "RemoveContainer" containerID="ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448"
Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.625775 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\": container with ID starting with ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448 not found: ID does not exist" containerID="ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.625810 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448"} err="failed to get container status \"ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\": rpc error: code = NotFound desc = could not find container \"ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448\": container with ID starting with ae4e66dd7e3279a506e218512b616dfbfcb250c2379e1f897ef8dc98808b4448 not found: ID does not exist"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.625837 5108 scope.go:117] "RemoveContainer" containerID="3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb"
Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.626280 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\": container with ID starting with 3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb not found: ID does not exist" containerID="3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.626308 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb"} err="failed to get container status \"3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\": rpc error: code = NotFound desc = could not find container \"3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb\": container with ID starting with 3d46b00787ba261cbec1de0b22278a57fd36d2971ffd878301be04fc606fdcbb not found: ID does not exist"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.626327 5108 scope.go:117] "RemoveContainer" containerID="f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb"
Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.626596 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\": container with ID starting with f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb not found: ID does not exist" containerID="f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.626624 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb"} err="failed to get container status \"f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\": rpc error: code = NotFound desc = could not find container \"f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb\": container with ID starting with f45977c5797e4ee01e82ce06d9d38d64e72387baa644a4497ebd0e18022b2bbb not found: ID does not exist"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.626641 5108 scope.go:117] "RemoveContainer" containerID="f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2"
Feb 02 00:13:39 crc kubenswrapper[5108]: E0202 00:13:39.627054 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\": container with ID starting with f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2 not found: ID does not exist" containerID="f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2"
Feb 02 00:13:39 crc kubenswrapper[5108]: I0202 00:13:39.627086 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2"} err="failed to get container status \"f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\": rpc error: code = NotFound desc = could not find container \"f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2\": container with ID starting with f2f7cb10e6ec4c854e368a2f27e276815513e2f7f45841e903390599000330a2 not found: ID does not exist"
Feb 02 00:13:41 crc kubenswrapper[5108]: I0202 00:13:41.561404 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused"
Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.238245 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused"
Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.238998 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused"
Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.239191 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused"
Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.239459 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused"
Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.239740 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused"
Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.239771 5108 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.240180 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms"
Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.441544 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms"
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.558167 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.571753 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.571785 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.572123 5108 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:46 crc kubenswrapper[5108]: I0202 00:13:46.572402 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:46 crc kubenswrapper[5108]: W0202 00:13:46.594580 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-3ba4ac9dfdb1b77e559293942e461734a57491dc89becac056b2cf31aa5c10ba WatchSource:0}: Error finding container 3ba4ac9dfdb1b77e559293942e461734a57491dc89becac056b2cf31aa5c10ba: Status 404 returned error can't find the container with id 3ba4ac9dfdb1b77e559293942e461734a57491dc89becac056b2cf31aa5c10ba Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.676283 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189045a22bab678b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,LastTimestamp:2026-02-02 00:13:36.376276875 +0000 UTC m=+215.651773805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 00:13:46 crc kubenswrapper[5108]: E0202 00:13:46.841972 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms" Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.512932 5108 
generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="dac8f2ddfdc264820f0cd3ef205bc5581d02f2a8a465372a19db14b35634b955" exitCode=0 Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.513054 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"dac8f2ddfdc264820f0cd3ef205bc5581d02f2a8a465372a19db14b35634b955"} Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.513149 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3ba4ac9dfdb1b77e559293942e461734a57491dc89becac056b2cf31aa5c10ba"} Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.513676 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.513704 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:13:47 crc kubenswrapper[5108]: E0202 00:13:47.515056 5108 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:47 crc kubenswrapper[5108]: I0202 00:13:47.515697 5108 status_manager.go:895] "Failed to get status for pod" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 02 00:13:47 crc kubenswrapper[5108]: E0202 00:13:47.644684 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s" Feb 02 00:13:48 crc kubenswrapper[5108]: I0202 00:13:48.529455 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c9a7dc95c0f7f9b83b6e5d752d2720b6307e2eda3e9e6ea2b1d68073e3fb0915"} Feb 02 00:13:48 crc kubenswrapper[5108]: I0202 00:13:48.529833 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"39469cde5ec0c8d8b15790c95f4c449cccd35906116e1dc7076f1bc0c83e2eab"} Feb 02 00:13:48 crc kubenswrapper[5108]: I0202 00:13:48.529845 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"98bb12bf0e05d03b24d7490a26732ed32cd9b9185c2fcd0ce8a8d9fb849d4625"} Feb 02 00:13:49 crc kubenswrapper[5108]: I0202 00:13:49.539438 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e977c8b1685dfdee63acf55f866dc21dc07f705c8810ccbdd3349085e9469d2f"} 
Feb 02 00:13:49 crc kubenswrapper[5108]: I0202 00:13:49.539493 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"83ff388bf90fa95675be6228bb2c49cd302d4f9170ee529be0b002ec0d3cf05a"} Feb 02 00:13:49 crc kubenswrapper[5108]: I0202 00:13:49.539895 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:49 crc kubenswrapper[5108]: I0202 00:13:49.539933 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:13:49 crc kubenswrapper[5108]: I0202 00:13:49.539967 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:13:50 crc kubenswrapper[5108]: I0202 00:13:50.920158 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:13:50 crc kubenswrapper[5108]: I0202 00:13:50.920301 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.562747 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.562832 5108 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca" exitCode=1 Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.565780 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca"} Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.566604 5108 scope.go:117] "RemoveContainer" containerID="88017323fd1c2648bba882a61fc679745f3c43c51cbbbe785c9b96c76501c4ca" Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.573045 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.573094 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:51 crc kubenswrapper[5108]: I0202 00:13:51.581529 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:52 crc kubenswrapper[5108]: I0202 00:13:52.575779 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:13:52 crc kubenswrapper[5108]: I0202 
00:13:52.576171 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"bb6fb08ab6c4d00166a440141f5cb57ca69ba366f1f91b9a802c4c4dca7cdbd8"} Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.549556 5108 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.549881 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.588992 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.589026 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.593719 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:13:54 crc kubenswrapper[5108]: I0202 00:13:54.616048 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="719b3a77-5020-470c-bf5f-ad05197649a8" Feb 02 00:13:55 crc kubenswrapper[5108]: I0202 00:13:55.594701 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:13:55 crc kubenswrapper[5108]: I0202 00:13:55.594729 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:13:55 crc kubenswrapper[5108]: I0202 00:13:55.598398 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="719b3a77-5020-470c-bf5f-ad05197649a8" Feb 02 00:13:57 crc kubenswrapper[5108]: I0202 00:13:57.176888 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:13:57 crc kubenswrapper[5108]: I0202 00:13:57.183150 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:13:57 crc kubenswrapper[5108]: I0202 00:13:57.605746 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:13:59 crc kubenswrapper[5108]: I0202 00:13:59.805485 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift" containerID="cri-o://83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826" gracePeriod=15 Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.280807 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386177 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386282 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386331 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386359 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386416 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386438 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386469 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386521 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gz45\" (UniqueName: \"kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386579 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 
00:14:00.386634 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386666 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386698 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386768 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.386820 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login\") pod \"03927a55-b629-4f9c-be0f-3499aba5b90e\" (UID: \"03927a55-b629-4f9c-be0f-3499aba5b90e\") " Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.387794 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.388504 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.388740 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.388835 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.389209 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.393613 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.406052 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45" (OuterVolumeSpecName: "kube-api-access-8gz45") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "kube-api-access-8gz45". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.406285 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.407972 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.408292 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.408867 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.412350 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.412638 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.412757 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "03927a55-b629-4f9c-be0f-3499aba5b90e" (UID: "03927a55-b629-4f9c-be0f-3499aba5b90e"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487846 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487893 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487909 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487923 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487955 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487966 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487977 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.487990 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488006 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488016 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488028 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488039 5108 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/03927a55-b629-4f9c-be0f-3499aba5b90e-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488049 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03927a55-b629-4f9c-be0f-3499aba5b90e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.488061 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gz45\" (UniqueName: \"kubernetes.io/projected/03927a55-b629-4f9c-be0f-3499aba5b90e-kube-api-access-8gz45\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.620871 5108 generic.go:358] "Generic (PLEG): container finished" podID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerID="83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826" exitCode=0 Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.620980 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" event={"ID":"03927a55-b629-4f9c-be0f-3499aba5b90e","Type":"ContainerDied","Data":"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826"} Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.621017 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" event={"ID":"03927a55-b629-4f9c-be0f-3499aba5b90e","Type":"ContainerDied","Data":"ab4178c0f93978aa03540a620121f5f5624450b66655822381ed4a7581fad072"} Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.621017 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-4lq2m" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.621039 5108 scope.go:117] "RemoveContainer" containerID="83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.649540 5108 scope.go:117] "RemoveContainer" containerID="83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826" Feb 02 00:14:00 crc kubenswrapper[5108]: E0202 00:14:00.650105 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826\": container with ID starting with 83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826 not found: ID does not exist" containerID="83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.650148 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826"} err="failed to get container status \"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826\": rpc error: code = NotFound desc = could not find container \"83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826\": container with ID starting with 83a1fb271e036cb23b3646758d3a77e625b0d188a2eaa398e70be1daa3bc0826 not found: ID does not exist" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.689646 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.833844 5108 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 02 00:14:00 crc kubenswrapper[5108]: I0202 00:14:00.849116 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 02 00:14:01 crc kubenswrapper[5108]: I0202 00:14:01.174042 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 02 00:14:01 crc kubenswrapper[5108]: I0202 00:14:01.831470 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 02 00:14:01 crc kubenswrapper[5108]: I0202 00:14:01.834541 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 02 00:14:02 crc kubenswrapper[5108]: I0202 00:14:02.255336 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 02 00:14:02 crc kubenswrapper[5108]: I0202 00:14:02.271822 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:02 crc kubenswrapper[5108]: I0202 00:14:02.409596 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:02 crc kubenswrapper[5108]: I0202 00:14:02.970633 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 02 00:14:03 crc kubenswrapper[5108]: I0202 00:14:03.554499 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 02 00:14:03 crc kubenswrapper[5108]: I0202 00:14:03.749690 5108 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:14:04 crc kubenswrapper[5108]: I0202 00:14:04.002970 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 02 00:14:04 crc kubenswrapper[5108]: I0202 00:14:04.308167 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 02 00:14:04 crc kubenswrapper[5108]: I0202 00:14:04.378257 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 02 00:14:04 crc kubenswrapper[5108]: I0202 00:14:04.834042 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 02 00:14:05 crc kubenswrapper[5108]: I0202 00:14:05.336131 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Feb 02 00:14:05 crc kubenswrapper[5108]: I0202 00:14:05.359854 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 02 00:14:05 crc kubenswrapper[5108]: I0202 00:14:05.412163 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 02 00:14:05 crc kubenswrapper[5108]: I0202 00:14:05.434053 5108 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 02 00:14:05 crc kubenswrapper[5108]: I0202 00:14:05.505758 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 02 00:14:06 crc kubenswrapper[5108]: I0202 00:14:06.363381 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 02 00:14:06 crc kubenswrapper[5108]: I0202 00:14:06.389528 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 02 00:14:06 crc kubenswrapper[5108]: I0202 00:14:06.855075 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 02 00:14:07 crc kubenswrapper[5108]: I0202 00:14:07.222976 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 02 00:14:07 crc kubenswrapper[5108]: I0202 00:14:07.625917 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 02 00:14:07 crc kubenswrapper[5108]: I0202 00:14:07.688135 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 02 00:14:08 crc kubenswrapper[5108]: I0202 00:14:08.308855 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 02 00:14:08 crc kubenswrapper[5108]: I0202 00:14:08.615171 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 00:14:08 crc kubenswrapper[5108]: I0202 00:14:08.617835 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:08 crc kubenswrapper[5108]: I0202 00:14:08.876164 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.135166 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.293705 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.295285 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.513221 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.624653 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.631639 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.648304 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.728604 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.731927 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 02 00:14:09 crc kubenswrapper[5108]: I0202 00:14:09.963936 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.031293 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.173779 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.194479 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.452840 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.484779 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.492736 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.651683 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.681843 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.695497 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.722993 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.726474 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.880561 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 02 00:14:10 crc kubenswrapper[5108]: I0202 00:14:10.935631 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 02 
00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.064641 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.065281 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.112796 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.125944 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.140007 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.247636 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.248524 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.301638 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.345016 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.397816 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.507994 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.573252 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.573433 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.584100 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.631004 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.639357 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.813037 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 02 00:14:11 crc kubenswrapper[5108]: 
I0202 00:14:11.838985 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.899677 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 02 00:14:11 crc kubenswrapper[5108]: I0202 00:14:11.997650 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.064176 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.183699 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.263764 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.303030 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.320986 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.322756 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.372485 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.479462 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.562674 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.711526 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.806410 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.835325 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.836031 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.862220 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.898223 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.945242 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 02 00:14:12 crc kubenswrapper[5108]: I0202 00:14:12.975244 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.036456 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.050978 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.128024 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.191628 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.199715 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.266689 5108 ???:1] "http: TLS handshake error from 192.168.126.11:54534: no serving certificate available for the kubelet" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.288380 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.438553 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.451531 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.453788 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.475710 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.563934 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.634013 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.801557 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.803803 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:13 crc kubenswrapper[5108]: 
I0202 00:14:13.826068 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.838615 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.858731 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.881821 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.896394 5108 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:14:13 crc kubenswrapper[5108]: I0202 00:14:13.924371 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.110503 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.147435 5108 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.152810 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-4lq2m","openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.152885 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-967dcd4bb-8x5dz","openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153376 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153409 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6045b615-dcb1-429a-b2f5-90320b248abd" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153544 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" containerName="installer" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153564 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" containerName="installer" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153576 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153582 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" containerName="oauth-openshift" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153705 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="baa9da1f-16dc-411f-8968-783a0e3d1efd" containerName="installer" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.153723 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" 
containerName="oauth-openshift" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.203566 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.203740 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.207378 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.207929 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.208645 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.208773 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.208826 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.208844 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.209072 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.210569 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.211071 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.211848 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.212381 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.212717 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.212847 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.213778 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.218408 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.226163 
5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.229416 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.229397396 podStartE2EDuration="20.229397396s" podCreationTimestamp="2026-02-02 00:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:14:14.227122024 +0000 UTC m=+253.502618964" watchObservedRunningTime="2026-02-02 00:14:14.229397396 +0000 UTC m=+253.504894326" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.235491 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.249896 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.278508 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298743 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298789 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298821 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-policies\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298844 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n47dq\" (UniqueName: \"kubernetes.io/projected/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-kube-api-access-n47dq\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298870 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: 
\"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298894 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-dir\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298914 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298937 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-router-certs\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.298981 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.299000 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-service-ca\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.299018 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-session\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.299049 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.299071 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-login\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.299092 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-error\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400099 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-router-certs\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400259 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400291 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-service-ca\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400317 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-session\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400353 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400382 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-login\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400411 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-error\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400451 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400483 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400521 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-policies\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400545 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n47dq\" (UniqueName: \"kubernetes.io/projected/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-kube-api-access-n47dq\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400582 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400612 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-dir\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.400638 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.401150 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.401195 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-dir\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.402112 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-audit-policies\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.402276 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-service-ca\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.403066 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.407847 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.407905 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.408310 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-login\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.409002 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.410161 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.410817 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-user-template-error\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.411118 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-router-certs\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.413207 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-v4-0-config-system-session\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.424534 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n47dq\" (UniqueName: \"kubernetes.io/projected/7e6a5122-2dba-4b6d-93a5-734a6f188f7d-kube-api-access-n47dq\") pod \"oauth-openshift-967dcd4bb-8x5dz\" (UID: \"7e6a5122-2dba-4b6d-93a5-734a6f188f7d\") " pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.424581 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.468019 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.524430 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.561290 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.612369 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.797677 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.840292 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.845078 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.864795 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.876726 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.915985 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 02 00:14:14 crc kubenswrapper[5108]: I0202 00:14:14.971868 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.095730 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.213137 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.229516 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.307400 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.411115 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.486523 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.509306 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.565147 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03927a55-b629-4f9c-be0f-3499aba5b90e" 
path="/var/lib/kubelet/pods/03927a55-b629-4f9c-be0f-3499aba5b90e/volumes" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.642798 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.671650 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.680265 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.768137 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.808524 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.837353 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.867941 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.905741 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.922124 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 02 00:14:15 crc kubenswrapper[5108]: I0202 00:14:15.937268 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.092078 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.095346 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.232759 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.259344 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.288521 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.292662 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.325122 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.366400 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.407975 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.580899 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.635854 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.645611 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.712873 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.783492 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.802464 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.813386 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.930588 5108 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.930881 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454" gracePeriod=5 Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.934652 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.943572 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.977070 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Feb 02 00:14:16 crc kubenswrapper[5108]: I0202 00:14:16.997451 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.005674 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 
00:14:17.023004 5108 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.057597 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.155539 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.171714 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.196129 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.331147 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.450193 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.513960 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.514080 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.579678 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.604311 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.823771 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.860818 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.908957 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:17 crc kubenswrapper[5108]: I0202 00:14:17.972904 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.097602 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.104776 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.261208 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 02 
00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.401084 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.511391 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.613840 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.641356 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.653765 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.725815 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.795355 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.834940 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 02 00:14:18 crc kubenswrapper[5108]: I0202 00:14:18.940721 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.027362 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.204918 5108 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.226897 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.290437 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.332118 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.470282 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.477531 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.640970 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.645102 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.663753 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.704960 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.739783 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.865004 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 02 00:14:19 crc kubenswrapper[5108]: I0202 00:14:19.890574 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.019118 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.115465 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.207432 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.375764 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.385016 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.435055 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.547999 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.632551 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.681685 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.755766 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.917894 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.919785 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.919871 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.953837 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 02 00:14:20 crc kubenswrapper[5108]: I0202 00:14:20.972867 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.099305 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.181755 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.262446 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.285400 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.402790 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.739648 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 02 00:14:21 crc kubenswrapper[5108]: I0202 00:14:21.825388 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.071733 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.135726 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.226385 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.253962 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.283985 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.315380 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.437870 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.512934 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.536442 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.536536 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.542063 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641328 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641368 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641391 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641422 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641473 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641467 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641503 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641477 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641492 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641859 5108 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641872 5108 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641882 5108 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.641890 5108 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.648246 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.650842 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.669313 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.743125 5108 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.814160 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.836794 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.836847 5108 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454" exitCode=137 Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.836913 5108 scope.go:117] "RemoveContainer" containerID="217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.836963 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.852600 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.865092 5108 scope.go:117] "RemoveContainer" containerID="217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454" Feb 02 00:14:22 crc kubenswrapper[5108]: E0202 00:14:22.865903 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454\": container with ID starting with 217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454 not found: ID does not exist" containerID="217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.865938 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454"} err="failed to get container status \"217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454\": rpc error: code = NotFound desc = could not find container \"217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454\": container with ID starting with 217e702255bff8edf059854fa080bb87ca29968a037ab097a6d4246405c82454 not found: ID does not exist" Feb 02 00:14:22 crc kubenswrapper[5108]: I0202 00:14:22.940718 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 02 00:14:23 crc 
kubenswrapper[5108]: I0202 00:14:23.366417 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"] Feb 02 00:14:23 crc kubenswrapper[5108]: I0202 00:14:23.569730 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Feb 02 00:14:23 crc kubenswrapper[5108]: I0202 00:14:23.631847 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-967dcd4bb-8x5dz"] Feb 02 00:14:23 crc kubenswrapper[5108]: I0202 00:14:23.844520 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" event={"ID":"7e6a5122-2dba-4b6d-93a5-734a6f188f7d","Type":"ContainerStarted","Data":"fadcd02b8178ba6c927c1e14a19f08cae37accd4ceb3ffc44455722ae13a67df"} Feb 02 00:14:24 crc kubenswrapper[5108]: I0202 00:14:24.269087 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 02 00:14:24 crc kubenswrapper[5108]: I0202 00:14:24.853693 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" event={"ID":"7e6a5122-2dba-4b6d-93a5-734a6f188f7d","Type":"ContainerStarted","Data":"ba4a2094fadafcb6a6db42d22900f7e11c02ae09f9387cfbd516df4f873e920d"} Feb 02 00:14:24 crc kubenswrapper[5108]: I0202 00:14:24.855353 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:24 crc kubenswrapper[5108]: I0202 00:14:24.864021 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" Feb 02 00:14:24 crc kubenswrapper[5108]: I0202 00:14:24.883319 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-967dcd4bb-8x5dz" podStartSLOduration=50.883293817 podStartE2EDuration="50.883293817s" podCreationTimestamp="2026-02-02 00:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:14:24.880668506 +0000 UTC m=+264.156165516" watchObservedRunningTime="2026-02-02 00:14:24.883293817 +0000 UTC m=+264.158790777" Feb 02 00:14:39 crc kubenswrapper[5108]: I0202 00:14:39.972088 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f60e56b-3881-49ee-be41-5435327c1be3" containerID="17a3c312150e2ad187bcb50ece3a0a3479395c7e181149518d0b3bec568dcd5a" exitCode=0 Feb 02 00:14:39 crc kubenswrapper[5108]: I0202 00:14:39.972179 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerDied","Data":"17a3c312150e2ad187bcb50ece3a0a3479395c7e181149518d0b3bec568dcd5a"} Feb 02 00:14:39 crc kubenswrapper[5108]: I0202 00:14:39.973183 5108 scope.go:117] "RemoveContainer" containerID="17a3c312150e2ad187bcb50ece3a0a3479395c7e181149518d0b3bec568dcd5a" Feb 02 00:14:40 crc kubenswrapper[5108]: I0202 00:14:40.986815 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerStarted","Data":"5a87ce4dbe06f64afb1f619d8b0c573d04b896291877c1eda1d92c83341dfdde"} Feb 02 00:14:40 
crc kubenswrapper[5108]: I0202 00:14:40.987848 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:14:40 crc kubenswrapper[5108]: I0202 00:14:40.988661 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.263418 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.263668 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" containerID="cri-o://675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686" gracePeriod=30 Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.286364 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.286946 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" containerID="cri-o://e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c" gracePeriod=30 Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.816841 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.818987 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.849782 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850349 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850368 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850381 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850387 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850395 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850401 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850493 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerName="route-controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850503 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerName="controller-manager" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.850512 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.868490 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.874194 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.878262 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.888170 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"] Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.888407 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940578 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca\") pod \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940662 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config\") pod \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940742 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfk4d\" (UniqueName: \"kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d\") pod \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940777 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940830 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-572g4\" (UniqueName: \"kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940858 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert\") pod \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940886 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp\") pod \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\" (UID: \"c6bb9533-ef42-4cf1-92de-3a011b1934b8\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940915 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940948 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.940995 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: 
\"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.941016 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config\") pod \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\" (UID: \"ebaf16ae-d4df-42da-a1b5-03495d1ef713\") " Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.941652 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp" (OuterVolumeSpecName: "tmp") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.941827 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp" (OuterVolumeSpecName: "tmp") pod "c6bb9533-ef42-4cf1-92de-3a011b1934b8" (UID: "c6bb9533-ef42-4cf1-92de-3a011b1934b8"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.941945 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.942079 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca" (OuterVolumeSpecName: "client-ca") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.941159 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.942607 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.942867 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca" (OuterVolumeSpecName: "client-ca") pod "c6bb9533-ef42-4cf1-92de-3a011b1934b8" (UID: "c6bb9533-ef42-4cf1-92de-3a011b1934b8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943037 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943077 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943108 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943132 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qwjx\" (UniqueName: \"kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943161 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943183 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943241 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7tng\" (UniqueName: \"kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943265 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " 
pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943283 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943316 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943325 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6bb9533-ef42-4cf1-92de-3a011b1934b8-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943335 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943345 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ebaf16ae-d4df-42da-a1b5-03495d1ef713-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943354 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.943761 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config" (OuterVolumeSpecName: "config") pod "c6bb9533-ef42-4cf1-92de-3a011b1934b8" (UID: "c6bb9533-ef42-4cf1-92de-3a011b1934b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.955423 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c6bb9533-ef42-4cf1-92de-3a011b1934b8" (UID: "c6bb9533-ef42-4cf1-92de-3a011b1934b8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.961333 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.963205 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4" (OuterVolumeSpecName: "kube-api-access-572g4") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "kube-api-access-572g4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.966615 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config" (OuterVolumeSpecName: "config") pod "ebaf16ae-d4df-42da-a1b5-03495d1ef713" (UID: "ebaf16ae-d4df-42da-a1b5-03495d1ef713"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:14:42 crc kubenswrapper[5108]: I0202 00:14:42.967641 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d" (OuterVolumeSpecName: "kube-api-access-tfk4d") pod "c6bb9533-ef42-4cf1-92de-3a011b1934b8" (UID: "c6bb9533-ef42-4cf1-92de-3a011b1934b8"). InnerVolumeSpecName "kube-api-access-tfk4d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.000124 5108 generic.go:358] "Generic (PLEG): container finished" podID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" containerID="675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686" exitCode=0 Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.000256 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.000285 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" event={"ID":"ebaf16ae-d4df-42da-a1b5-03495d1ef713","Type":"ContainerDied","Data":"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686"} Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.000339 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fc5pz" event={"ID":"ebaf16ae-d4df-42da-a1b5-03495d1ef713","Type":"ContainerDied","Data":"3158eaa8cced5445a37b12560efe834d0b215f5c202cf0145f728d9c8aaa5068"} Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.000362 5108 scope.go:117] "RemoveContainer" containerID="675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.002268 5108 generic.go:358] "Generic (PLEG): container finished" podID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" containerID="e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c" exitCode=0 Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.002379 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.002435 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" event={"ID":"c6bb9533-ef42-4cf1-92de-3a011b1934b8","Type":"ContainerDied","Data":"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c"} Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.002454 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv" event={"ID":"c6bb9533-ef42-4cf1-92de-3a011b1934b8","Type":"ContainerDied","Data":"683d5e48d4bbd76223bfa55ebb9faedf8bd6693391a55afaa0790e34cd786995"} Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.035350 5108 scope.go:117] "RemoveContainer" containerID="675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686" Feb 02 00:14:43 crc kubenswrapper[5108]: E0202 00:14:43.038032 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686\": container with ID starting with 675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686 not found: ID does not exist" containerID="675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.038063 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686"} err="failed to get container status \"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686\": rpc error: code = NotFound desc = could not find container \"675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686\": container with ID starting with 675617ae0086e9184dd82d2544676e588f328e5205ee1bf08a42c745790c5686 not found: ID does not exist" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.038086 5108 scope.go:117] "RemoveContainer" containerID="e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055618 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7tng\" (UniqueName: \"kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055734 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055833 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055879 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055946 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.055973 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056032 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056067 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4qwjx\" (UniqueName: \"kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056111 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056252 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ebaf16ae-d4df-42da-a1b5-03495d1ef713-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056263 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf16ae-d4df-42da-a1b5-03495d1ef713-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056275 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6bb9533-ef42-4cf1-92de-3a011b1934b8-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056288 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tfk4d\" (UniqueName: \"kubernetes.io/projected/c6bb9533-ef42-4cf1-92de-3a011b1934b8-kube-api-access-tfk4d\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056299 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-572g4\" (UniqueName: \"kubernetes.io/projected/ebaf16ae-d4df-42da-a1b5-03495d1ef713-kube-api-access-572g4\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.056307 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6bb9533-ef42-4cf1-92de-3a011b1934b8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.057214 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.057645 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.058737 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.059463 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.059605 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.060602 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.062841 5108 scope.go:117] "RemoveContainer" containerID="e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c" Feb 02 00:14:43 crc kubenswrapper[5108]: E0202 00:14:43.068340 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c\": container with ID starting with e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c not found: ID does not exist" containerID="e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.068398 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c"} err="failed to get container status \"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c\": rpc error: code = NotFound desc = could not find container \"e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c\": container with ID starting with e3a6eeae3bb2c04e522cda0b93fc612bb720b63956416a463041ad5d8ca8a24c not found: ID does not exist" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.069290 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.069355 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.070220 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.075105 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.081070 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qwjx\" (UniqueName: \"kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx\") pod \"route-controller-manager-68bfbc78f4-bxsbg\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.082087 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7tng\" (UniqueName: \"kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng\") pod \"controller-manager-567446f66d-rb24c\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.087271 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fc5pz"] Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.095145 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"] Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.102317 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-xtqwv"] Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.185943 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.207345 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.407069 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.468986 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"] Feb 02 00:14:43 crc kubenswrapper[5108]: W0202 00:14:43.478470 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeda4dd1_4f20_4369_bafc_0ac6eb8e8f6b.slice/crio-afcac4ec6438b6ba3bf2cfd787ad93083aa7277c7f6047771319ebb5e3cd2d60 WatchSource:0}: Error finding container afcac4ec6438b6ba3bf2cfd787ad93083aa7277c7f6047771319ebb5e3cd2d60: Status 404 returned error can't find the container with id afcac4ec6438b6ba3bf2cfd787ad93083aa7277c7f6047771319ebb5e3cd2d60 Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.566131 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6bb9533-ef42-4cf1-92de-3a011b1934b8" path="/var/lib/kubelet/pods/c6bb9533-ef42-4cf1-92de-3a011b1934b8/volumes" Feb 02 00:14:43 crc kubenswrapper[5108]: I0202 00:14:43.567297 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebaf16ae-d4df-42da-a1b5-03495d1ef713" path="/var/lib/kubelet/pods/ebaf16ae-d4df-42da-a1b5-03495d1ef713/volumes" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.010397 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" event={"ID":"79198f63-420b-43d9-b3a1-bf017d820757","Type":"ContainerStarted","Data":"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613"} Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.010745 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.010757 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" event={"ID":"79198f63-420b-43d9-b3a1-bf017d820757","Type":"ContainerStarted","Data":"58ccd3c5158422578e61b7d7f4b1bdfac6ed4226edc2df1bcf366f305ad50537"} Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.013014 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" event={"ID":"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b","Type":"ContainerStarted","Data":"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14"} Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.013054 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" event={"ID":"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b","Type":"ContainerStarted","Data":"afcac4ec6438b6ba3bf2cfd787ad93083aa7277c7f6047771319ebb5e3cd2d60"} Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.013277 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.031183 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" podStartSLOduration=2.031169672 podStartE2EDuration="2.031169672s" podCreationTimestamp="2026-02-02 00:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:14:44.029450076 +0000 UTC m=+283.304947016" watchObservedRunningTime="2026-02-02 00:14:44.031169672 +0000 UTC m=+283.306666592" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.056510 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" podStartSLOduration=2.056493359 podStartE2EDuration="2.056493359s" podCreationTimestamp="2026-02-02 00:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:14:44.05390334 +0000 UTC m=+283.329400290" watchObservedRunningTime="2026-02-02 00:14:44.056493359 +0000 UTC m=+283.331990289" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.486637 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:14:44 crc kubenswrapper[5108]: I0202 00:14:44.622385 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.672628 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-gr7jw"] Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.788782 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-gr7jw"] Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.788953 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.941490 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942703 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-registry-tls\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942779 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-trusted-ca\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942800 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-registry-certificates\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942840 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-bound-sa-token\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942857 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2b620522-8e7c-4ff5-b88f-658a64778055-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942873 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2b620522-8e7c-4ff5-b88f-658a64778055-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.942903 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26zw\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-kube-api-access-x26zw\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:48 crc kubenswrapper[5108]: I0202 00:14:48.964587 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.043965 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-bound-sa-token\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044019 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2b620522-8e7c-4ff5-b88f-658a64778055-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044044 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2b620522-8e7c-4ff5-b88f-658a64778055-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044083 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x26zw\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-kube-api-access-x26zw\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044137 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-registry-tls\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044186 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-trusted-ca\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.044211 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-registry-certificates\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.045504 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2b620522-8e7c-4ff5-b88f-658a64778055-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.046062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-trusted-ca\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.046551 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2b620522-8e7c-4ff5-b88f-658a64778055-registry-certificates\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.051984 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-registry-tls\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.053379 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2b620522-8e7c-4ff5-b88f-658a64778055-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.064180 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26zw\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-kube-api-access-x26zw\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.074030 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b620522-8e7c-4ff5-b88f-658a64778055-bound-sa-token\") pod \"image-registry-5d9d95bf5b-gr7jw\" (UID: \"2b620522-8e7c-4ff5-b88f-658a64778055\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.112213 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.309156 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-gr7jw"] Feb 02 00:14:49 crc kubenswrapper[5108]: I0202 00:14:49.906840 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56384: no serving certificate available for the kubelet" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.063805 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" event={"ID":"2b620522-8e7c-4ff5-b88f-658a64778055","Type":"ContainerStarted","Data":"303ec1f9caf3151304e6616bbfad983b04e1c158e69967056463655a668a4260"} Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.063857 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" event={"ID":"2b620522-8e7c-4ff5-b88f-658a64778055","Type":"ContainerStarted","Data":"fd0cb1e01a3d1efcdad86229ce823d9f2a11d654fb84184416fde311614bf895"} Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.064132 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.081486 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" podStartSLOduration=2.081466981 podStartE2EDuration="2.081466981s" podCreationTimestamp="2026-02-02 00:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:14:50.077778563 +0000 UTC m=+289.353275513" watchObservedRunningTime="2026-02-02 00:14:50.081466981 +0000 UTC m=+289.356963911" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.919861 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.920273 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.920341 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.921075 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:14:50 crc kubenswrapper[5108]: I0202 00:14:50.921213 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" 
containerName="machine-config-daemon" containerID="cri-o://7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f" gracePeriod=600 Feb 02 00:14:51 crc kubenswrapper[5108]: I0202 00:14:51.073873 5108 generic.go:358] "Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f" exitCode=0 Feb 02 00:14:51 crc kubenswrapper[5108]: I0202 00:14:51.073991 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f"} Feb 02 00:14:52 crc kubenswrapper[5108]: I0202 00:14:52.082636 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e"} Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.169884 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk"] Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.194499 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.198447 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.200165 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.216381 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk"] Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.298677 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.298771 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npgw6\" (UniqueName: \"kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.298858 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.401908 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.401977 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-npgw6\" (UniqueName: \"kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.402104 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.403948 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.414291 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.421769 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-npgw6\" (UniqueName: \"kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6\") pod \"collect-profiles-29499855-f84hk\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.526337 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:00 crc kubenswrapper[5108]: I0202 00:15:00.928691 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk"] Feb 02 00:15:00 crc kubenswrapper[5108]: W0202 00:15:00.936928 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod108138a6_cd12_40d8_be19_580628ff3407.slice/crio-98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9 WatchSource:0}: Error finding container 98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9: Status 404 returned error can't find the container with id 98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9 Feb 02 00:15:01 crc kubenswrapper[5108]: I0202 00:15:01.157515 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" event={"ID":"108138a6-cd12-40d8-be19-580628ff3407","Type":"ContainerStarted","Data":"98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9"} Feb 02 00:15:01 crc kubenswrapper[5108]: I0202 00:15:01.741880 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:15:01 crc kubenswrapper[5108]: I0202 00:15:01.743329 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.170184 5108 generic.go:358] "Generic (PLEG): container finished" podID="108138a6-cd12-40d8-be19-580628ff3407" containerID="ad8d695e762a2c513b0dc9d2445c1f0ed0b7ba50992f69b8964360c32e2952c9" exitCode=0 Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.170447 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" event={"ID":"108138a6-cd12-40d8-be19-580628ff3407","Type":"ContainerDied","Data":"ad8d695e762a2c513b0dc9d2445c1f0ed0b7ba50992f69b8964360c32e2952c9"} Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.269459 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.269885 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" podUID="79198f63-420b-43d9-b3a1-bf017d820757" containerName="controller-manager" containerID="cri-o://24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613" gracePeriod=30 Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.293333 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"] Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.293771 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" podUID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" containerName="route-controller-manager" containerID="cri-o://0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14" gracePeriod=30 Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.885953 5108 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.916682 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.917427 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" containerName="route-controller-manager" Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.917447 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" containerName="route-controller-manager" Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.917535 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" containerName="route-controller-manager" Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.957413 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:02 crc kubenswrapper[5108]: I0202 00:15:02.957597 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.040601 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qwjx\" (UniqueName: \"kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx\") pod \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.040663 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca\") pod \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.040755 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config\") pod \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.040801 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert\") pod \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.040836 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp\") pod \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\" (UID: \"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.041424 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca" (OuterVolumeSpecName: "client-ca") pod "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" (UID: "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.041485 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp" (OuterVolumeSpecName: "tmp") pod "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" (UID: "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.041643 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config" (OuterVolumeSpecName: "config") pod "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" (UID: "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.049491 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx" (OuterVolumeSpecName: "kube-api-access-4qwjx") pod "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" (UID: "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b"). InnerVolumeSpecName "kube-api-access-4qwjx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.052164 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" (UID: "feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.137378 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142048 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142132 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbngd\" (UniqueName: \"kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142170 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142315 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142468 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142635 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4qwjx\" (UniqueName: \"kubernetes.io/projected/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-kube-api-access-4qwjx\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142668 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142684 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142698 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.142710 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.174524 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"] Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.175466 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="79198f63-420b-43d9-b3a1-bf017d820757" containerName="controller-manager" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.175493 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="79198f63-420b-43d9-b3a1-bf017d820757" containerName="controller-manager" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.175648 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="79198f63-420b-43d9-b3a1-bf017d820757" containerName="controller-manager" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.179544 5108 generic.go:358] "Generic (PLEG): container finished" podID="79198f63-420b-43d9-b3a1-bf017d820757" containerID="24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613" exitCode=0 Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.181425 5108 generic.go:358] "Generic (PLEG): container finished" podID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" containerID="0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14" exitCode=0 Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244098 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7tng\" (UniqueName: \"kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244194 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244256 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244396 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244426 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244454 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config\") pod \"79198f63-420b-43d9-b3a1-bf017d820757\" (UID: \"79198f63-420b-43d9-b3a1-bf017d820757\") " Feb 02 00:15:03 crc kubenswrapper[5108]: 
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244576 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244622 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244672 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244776 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbngd\" (UniqueName: \"kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244833 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.244983 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp" (OuterVolumeSpecName: "tmp") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.245155 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.245196 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca" (OuterVolumeSpecName: "client-ca") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.245512 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.245979 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.246186 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config" (OuterVolumeSpecName: "config") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.246232 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.247514 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng" (OuterVolumeSpecName: "kube-api-access-n7tng") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "kube-api-access-n7tng". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248367 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"]
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248398 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" event={"ID":"79198f63-420b-43d9-b3a1-bf017d820757","Type":"ContainerDied","Data":"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613"}
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248428 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" event={"ID":"79198f63-420b-43d9-b3a1-bf017d820757","Type":"ContainerDied","Data":"58ccd3c5158422578e61b7d7f4b1bdfac6ed4226edc2df1bcf366f305ad50537"}
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248446 5108 scope.go:117] "RemoveContainer" containerID="24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613"
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248458 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248491 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-567446f66d-rb24c" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248613 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" event={"ID":"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b","Type":"ContainerDied","Data":"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14"} Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.248671 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg" event={"ID":"feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b","Type":"ContainerDied","Data":"afcac4ec6438b6ba3bf2cfd787ad93083aa7277c7f6047771319ebb5e3cd2d60"} Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.249181 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.249821 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "79198f63-420b-43d9-b3a1-bf017d820757" (UID: "79198f63-420b-43d9-b3a1-bf017d820757"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.250826 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.270881 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbngd\" (UniqueName: \"kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd\") pod \"route-controller-manager-77bcd8cdb5-6vm5g\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.274949 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.288665 5108 scope.go:117] "RemoveContainer" containerID="24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613" Feb 02 00:15:03 crc kubenswrapper[5108]: E0202 00:15:03.289601 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613\": container with ID starting with 24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613 not found: ID does not exist" containerID="24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.289650 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613"} err="failed to get container status \"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613\": rpc error: code = NotFound desc = could not find container \"24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613\": container with ID starting with 24c388689b6d18e559e039eac61047620d47d7a4986075885b97c7d70882e613 not found: ID does not exist" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.289685 5108 scope.go:117] "RemoveContainer" containerID="0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.303434 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"] Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.308307 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bfbc78f4-bxsbg"] Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.324870 5108 scope.go:117] "RemoveContainer" containerID="0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14" Feb 02 00:15:03 crc kubenswrapper[5108]: E0202 00:15:03.325425 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14\": container with ID starting with 0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14 not found: ID does not exist" containerID="0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.325493 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14"} err="failed to get container status \"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14\": rpc error: code = NotFound desc = could not find container \"0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14\": container with ID starting with 0814f1cbe8b32cb9f47fc6b6182a1f0532eacaa734b9583b8a5d26b7154f7a14 not found: ID does not exist" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.346218 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " 
pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.346740 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw4fc\" (UniqueName: \"kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.346846 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.346967 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347057 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347360 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347616 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/79198f63-420b-43d9-b3a1-bf017d820757-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347636 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347647 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347657 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7tng\" (UniqueName: \"kubernetes.io/projected/79198f63-420b-43d9-b3a1-bf017d820757-kube-api-access-n7tng\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347667 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79198f63-420b-43d9-b3a1-bf017d820757-serving-cert\") on node \"crc\" DevicePath \"\"" 
Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.347676 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79198f63-420b-43d9-b3a1-bf017d820757-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449016 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449098 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449166 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449194 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tw4fc\" (UniqueName: \"kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449221 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.449474 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.450411 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.450816 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " 
pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.450971 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.456514 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.459564 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.476417 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw4fc\" (UniqueName: \"kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc\") pod \"controller-manager-6f6bd77fc8-wrqmg\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.503958 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.564454 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b" path="/var/lib/kubelet/pods/feda4dd1-4f20-4369-bafc-0ac6eb8e8f6b/volumes" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.574472 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.577917 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.580998 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-567446f66d-rb24c"] Feb 02 00:15:03 crc kubenswrapper[5108]: W0202 00:15:03.582533 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bf79def_e801_4283_9dcf_dc94d07e4ce7.slice/crio-95340582a5d80262d0b4bed25729f485b6b81519ce917f8cca0b750a62777415 WatchSource:0}: Error finding container 95340582a5d80262d0b4bed25729f485b6b81519ce917f8cca0b750a62777415: Status 404 returned error can't find the container with id 95340582a5d80262d0b4bed25729f485b6b81519ce917f8cca0b750a62777415 Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.585452 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.586376 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.653873 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npgw6\" (UniqueName: \"kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6\") pod \"108138a6-cd12-40d8-be19-580628ff3407\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.654135 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume\") pod \"108138a6-cd12-40d8-be19-580628ff3407\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.654202 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume\") pod \"108138a6-cd12-40d8-be19-580628ff3407\" (UID: \"108138a6-cd12-40d8-be19-580628ff3407\") " Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.655291 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume" (OuterVolumeSpecName: "config-volume") pod "108138a6-cd12-40d8-be19-580628ff3407" (UID: "108138a6-cd12-40d8-be19-580628ff3407"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.660607 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6" (OuterVolumeSpecName: "kube-api-access-npgw6") pod "108138a6-cd12-40d8-be19-580628ff3407" (UID: "108138a6-cd12-40d8-be19-580628ff3407"). InnerVolumeSpecName "kube-api-access-npgw6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.660697 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "108138a6-cd12-40d8-be19-580628ff3407" (UID: "108138a6-cd12-40d8-be19-580628ff3407"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.755848 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/108138a6-cd12-40d8-be19-580628ff3407-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.755907 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108138a6-cd12-40d8-be19-580628ff3407-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.755919 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-npgw6\" (UniqueName: \"kubernetes.io/projected/108138a6-cd12-40d8-be19-580628ff3407-kube-api-access-npgw6\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:03 crc kubenswrapper[5108]: I0202 00:15:03.984926 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"] Feb 02 00:15:03 crc kubenswrapper[5108]: W0202 00:15:03.995003 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29e53688_b891_48f3_a8ac_3b2843a5a8bd.slice/crio-68c081537859e48cac0d70a4fcd8ca0ff164c7eec35922d09962d3b0f66e08de WatchSource:0}: Error finding container 68c081537859e48cac0d70a4fcd8ca0ff164c7eec35922d09962d3b0f66e08de: Status 404 returned error can't find the container with id 68c081537859e48cac0d70a4fcd8ca0ff164c7eec35922d09962d3b0f66e08de Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.221760 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.223450 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499855-f84hk" event={"ID":"108138a6-cd12-40d8-be19-580628ff3407","Type":"ContainerDied","Data":"98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9"} Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.223503 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d6499e8eabc175d98097137368aeeb30eef1a96b9954ece3a0ab1e76e359f9" Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.228206 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" event={"ID":"6bf79def-e801-4283-9dcf-dc94d07e4ce7","Type":"ContainerStarted","Data":"9d06c8fe1744806b6a7cb930eefb05bbfcb5ace06fee7045171fa1b68f0f3ded"} Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.228408 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" event={"ID":"6bf79def-e801-4283-9dcf-dc94d07e4ce7","Type":"ContainerStarted","Data":"95340582a5d80262d0b4bed25729f485b6b81519ce917f8cca0b750a62777415"} Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.229079 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.234129 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" event={"ID":"29e53688-b891-48f3-a8ac-3b2843a5a8bd","Type":"ContainerStarted","Data":"68c081537859e48cac0d70a4fcd8ca0ff164c7eec35922d09962d3b0f66e08de"} Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.266326 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" podStartSLOduration=2.266309517 podStartE2EDuration="2.266309517s" podCreationTimestamp="2026-02-02 00:15:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:04.26418826 +0000 UTC m=+303.539685200" watchObservedRunningTime="2026-02-02 00:15:04.266309517 +0000 UTC m=+303.541806447" Feb 02 00:15:04 crc kubenswrapper[5108]: I0202 00:15:04.606423 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:05 crc kubenswrapper[5108]: I0202 00:15:05.241469 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" event={"ID":"29e53688-b891-48f3-a8ac-3b2843a5a8bd","Type":"ContainerStarted","Data":"5486a5369ee6807c8ca56ed6196786f4085e1c979dbbd30a3ffa6238270af407"} Feb 02 00:15:05 crc kubenswrapper[5108]: I0202 00:15:05.261842 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" podStartSLOduration=3.261816139 podStartE2EDuration="3.261816139s" podCreationTimestamp="2026-02-02 00:15:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 00:15:05.25771983 +0000 UTC m=+304.533216830" watchObservedRunningTime="2026-02-02 00:15:05.261816139 +0000 UTC m=+304.537313109" Feb 02 00:15:05 crc kubenswrapper[5108]: I0202 00:15:05.569299 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79198f63-420b-43d9-b3a1-bf017d820757" path="/var/lib/kubelet/pods/79198f63-420b-43d9-b3a1-bf017d820757/volumes" Feb 02 00:15:06 crc kubenswrapper[5108]: I0202 00:15:06.249160 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:06 crc kubenswrapper[5108]: I0202 00:15:06.257438 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:10 crc kubenswrapper[5108]: I0202 00:15:10.424755 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 00:15:11 crc kubenswrapper[5108]: I0202 00:15:11.083543 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-gr7jw" Feb 02 00:15:11 crc kubenswrapper[5108]: I0202 00:15:11.145937 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:15:22 crc kubenswrapper[5108]: I0202 00:15:22.256912 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"] Feb 02 00:15:22 crc kubenswrapper[5108]: I0202 00:15:22.257684 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" podUID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" containerName="controller-manager" containerID="cri-o://5486a5369ee6807c8ca56ed6196786f4085e1c979dbbd30a3ffa6238270af407" gracePeriod=30 Feb 02 00:15:22 crc kubenswrapper[5108]: I0202 00:15:22.282055 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:22 crc kubenswrapper[5108]: I0202 00:15:22.282420 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" podUID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" containerName="route-controller-manager" containerID="cri-o://9d06c8fe1744806b6a7cb930eefb05bbfcb5ace06fee7045171fa1b68f0f3ded" gracePeriod=30 Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.384814 5108 generic.go:358] "Generic (PLEG): container finished" podID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" containerID="5486a5369ee6807c8ca56ed6196786f4085e1c979dbbd30a3ffa6238270af407" exitCode=0 Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.385525 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" event={"ID":"29e53688-b891-48f3-a8ac-3b2843a5a8bd","Type":"ContainerDied","Data":"5486a5369ee6807c8ca56ed6196786f4085e1c979dbbd30a3ffa6238270af407"} Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.387394 5108 generic.go:358] "Generic (PLEG): container finished" podID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" containerID="9d06c8fe1744806b6a7cb930eefb05bbfcb5ace06fee7045171fa1b68f0f3ded" exitCode=0 Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.387429 5108 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" event={"ID":"6bf79def-e801-4283-9dcf-dc94d07e4ce7","Type":"ContainerDied","Data":"9d06c8fe1744806b6a7cb930eefb05bbfcb5ace06fee7045171fa1b68f0f3ded"} Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.540683 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.574139 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.574701 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" containerName="controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.574721 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" containerName="controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.574743 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="108138a6-cd12-40d8-be19-580628ff3407" containerName="collect-profiles" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.574939 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="108138a6-cd12-40d8-be19-580628ff3407" containerName="collect-profiles" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.575042 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="108138a6-cd12-40d8-be19-580628ff3407" containerName="collect-profiles" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.575055 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" containerName="controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644589 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw4fc\" (UniqueName: \"kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc\") pod \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644667 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca\") pod \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644697 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert\") pod \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644734 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config\") pod \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644752 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles\") pod 
\"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.644815 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp\") pod \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\" (UID: \"29e53688-b891-48f3-a8ac-3b2843a5a8bd\") " Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.646140 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp" (OuterVolumeSpecName: "tmp") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.646161 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.646204 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config" (OuterVolumeSpecName: "config") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.646388 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.654395 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc" (OuterVolumeSpecName: "kube-api-access-tw4fc") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "kube-api-access-tw4fc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.654542 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "29e53688-b891-48f3-a8ac-3b2843a5a8bd" (UID: "29e53688-b891-48f3-a8ac-3b2843a5a8bd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.720929 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.721245 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746572 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tw4fc\" (UniqueName: \"kubernetes.io/projected/29e53688-b891-48f3-a8ac-3b2843a5a8bd-kube-api-access-tw4fc\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746599 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746612 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e53688-b891-48f3-a8ac-3b2843a5a8bd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746641 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746651 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29e53688-b891-48f3-a8ac-3b2843a5a8bd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.746661 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29e53688-b891-48f3-a8ac-3b2843a5a8bd-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851386 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851445 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851469 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851490 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851535 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.851561 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4kx5\" (UniqueName: \"kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.872422 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.908958 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.910119 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" containerName="route-controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.910159 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" containerName="route-controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.910476 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" containerName="route-controller-manager" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.952797 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.952849 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4kx5\" (UniqueName: \"kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.952915 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.953395 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 
00:15:23.953435 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.953463 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.953999 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.954112 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.954350 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.955254 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.959023 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.962257 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.962404 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:23 crc kubenswrapper[5108]: I0202 00:15:23.968608 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4kx5\" (UniqueName: \"kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5\") pod \"controller-manager-65678dd567-lql72\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.052629 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056307 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config\") pod \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056340 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert\") pod \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056404 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp\") pod \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056430 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbngd\" (UniqueName: \"kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd\") pod \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056509 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca\") pod \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\" (UID: \"6bf79def-e801-4283-9dcf-dc94d07e4ce7\") " Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056634 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056688 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056708 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056739 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.056772 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5gmn\" (UniqueName: \"kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.058036 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp" (OuterVolumeSpecName: "tmp") pod "6bf79def-e801-4283-9dcf-dc94d07e4ce7" (UID: "6bf79def-e801-4283-9dcf-dc94d07e4ce7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.058100 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca" (OuterVolumeSpecName: "client-ca") pod "6bf79def-e801-4283-9dcf-dc94d07e4ce7" (UID: "6bf79def-e801-4283-9dcf-dc94d07e4ce7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.058242 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config" (OuterVolumeSpecName: "config") pod "6bf79def-e801-4283-9dcf-dc94d07e4ce7" (UID: "6bf79def-e801-4283-9dcf-dc94d07e4ce7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.061390 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6bf79def-e801-4283-9dcf-dc94d07e4ce7" (UID: "6bf79def-e801-4283-9dcf-dc94d07e4ce7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.063898 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd" (OuterVolumeSpecName: "kube-api-access-zbngd") pod "6bf79def-e801-4283-9dcf-dc94d07e4ce7" (UID: "6bf79def-e801-4283-9dcf-dc94d07e4ce7"). InnerVolumeSpecName "kube-api-access-zbngd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.157755 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5gmn\" (UniqueName: \"kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.157850 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.157913 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.157941 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158003 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158051 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zbngd\" (UniqueName: \"kubernetes.io/projected/6bf79def-e801-4283-9dcf-dc94d07e4ce7-kube-api-access-zbngd\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158065 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158080 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf79def-e801-4283-9dcf-dc94d07e4ce7-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158095 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf79def-e801-4283-9dcf-dc94d07e4ce7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.158110 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf79def-e801-4283-9dcf-dc94d07e4ce7-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:24 crc kubenswrapper[5108]: 
I0202 00:15:24.159452 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.160113 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.160744 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.165584 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.177673 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5gmn\" (UniqueName: \"kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn\") pod \"route-controller-manager-79b98f778c-rmbgx\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.275478 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.397900 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.400599 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg" event={"ID":"29e53688-b891-48f3-a8ac-3b2843a5a8bd","Type":"ContainerDied","Data":"68c081537859e48cac0d70a4fcd8ca0ff164c7eec35922d09962d3b0f66e08de"} Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.400675 5108 scope.go:117] "RemoveContainer" containerID="5486a5369ee6807c8ca56ed6196786f4085e1c979dbbd30a3ffa6238270af407" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.404630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" event={"ID":"6bf79def-e801-4283-9dcf-dc94d07e4ce7","Type":"ContainerDied","Data":"95340582a5d80262d0b4bed25729f485b6b81519ce917f8cca0b750a62777415"} Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.404722 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.427926 5108 scope.go:117] "RemoveContainer" containerID="9d06c8fe1744806b6a7cb930eefb05bbfcb5ace06fee7045171fa1b68f0f3ded" Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.434685 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"] Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.441035 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f6bd77fc8-wrqmg"] Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.445364 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.450720 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bcd8cdb5-6vm5g"] Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.486633 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:24 crc kubenswrapper[5108]: I0202 00:15:24.486690 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:24 crc kubenswrapper[5108]: W0202 00:15:24.493577 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77d8873e_3275_40a4_987d_a8d2f5489461.slice/crio-cbbb9d530c606d7c20d199a0daee8fc2b7af8b3c2f71306efb862a8569b37212 WatchSource:0}: Error finding container cbbb9d530c606d7c20d199a0daee8fc2b7af8b3c2f71306efb862a8569b37212: Status 404 returned error can't find the container with id cbbb9d530c606d7c20d199a0daee8fc2b7af8b3c2f71306efb862a8569b37212 Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.411590 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" event={"ID":"77d8873e-3275-40a4-987d-a8d2f5489461","Type":"ContainerStarted","Data":"6d76ca120eed12f5955fe4993b5e130be9e960cdb6b5ad865d61be03b84b9de0"} Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.411636 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" event={"ID":"77d8873e-3275-40a4-987d-a8d2f5489461","Type":"ContainerStarted","Data":"cbbb9d530c606d7c20d199a0daee8fc2b7af8b3c2f71306efb862a8569b37212"} Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.412041 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.414578 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" event={"ID":"36503b52-c5de-4acc-9b2d-4b006a58c586","Type":"ContainerStarted","Data":"51422b9b14c5e121e52c764cd05f2c885e1a9040876867e3b6e98ed49215c05a"} Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.414642 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" 
event={"ID":"36503b52-c5de-4acc-9b2d-4b006a58c586","Type":"ContainerStarted","Data":"bd55002ad86a550361e62870063a3fae4c4e9cc5bee2e68716b86baa8fdcd306"} Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.414946 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.437662 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" podStartSLOduration=3.437632329 podStartE2EDuration="3.437632329s" podCreationTimestamp="2026-02-02 00:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:25.4324713 +0000 UTC m=+324.707968250" watchObservedRunningTime="2026-02-02 00:15:25.437632329 +0000 UTC m=+324.713129259" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.592602 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e53688-b891-48f3-a8ac-3b2843a5a8bd" path="/var/lib/kubelet/pods/29e53688-b891-48f3-a8ac-3b2843a5a8bd/volumes" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.594810 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf79def-e801-4283-9dcf-dc94d07e4ce7" path="/var/lib/kubelet/pods/6bf79def-e801-4283-9dcf-dc94d07e4ce7/volumes" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.595737 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.630926 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" podStartSLOduration=3.630904513 podStartE2EDuration="3.630904513s" podCreationTimestamp="2026-02-02 00:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:25.453995126 +0000 UTC m=+324.729492086" watchObservedRunningTime="2026-02-02 00:15:25.630904513 +0000 UTC m=+324.906401443" Feb 02 00:15:25 crc kubenswrapper[5108]: I0202 00:15:25.818326 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.058580 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-52cvp"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.060817 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-52cvp" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="registry-server" containerID="cri-o://44c29c35f3f042606025783238fe84449fa274df709647a8bb2c6f5b25f6ea6a" gracePeriod=30 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.071147 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.071569 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8l8nm" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="registry-server" 
containerID="cri-o://0df55c9f0ebaec40aacdfbba7ebb6e0073cb9d22b3cdc2120d6cd95d09159f3c" gracePeriod=30 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.091768 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.092089 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" containerID="cri-o://5a87ce4dbe06f64afb1f619d8b0c573d04b896291877c1eda1d92c83341dfdde" gracePeriod=30 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.110440 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.112531 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wzh6n" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="registry-server" containerID="cri-o://7027daeb8294c638005dbc109971ebb173c299ff05d37653d85c7855028e63bd" gracePeriod=30 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.122343 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.123861 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g4h5k" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="registry-server" containerID="cri-o://3f0b7cceb8942beae974160beea654ece1ffcbdf5f51cb46e2bcafac40dd76f7" gracePeriod=30 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.131388 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t6j5g"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.148860 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t6j5g"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.149178 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.244573 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e18aabab-6cfe-4b88-9efd-a44ecbcace87-tmp\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.245104 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.245152 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ktf9\" (UniqueName: \"kubernetes.io/projected/e18aabab-6cfe-4b88-9efd-a44ecbcace87-kube-api-access-4ktf9\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.245274 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.346710 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e18aabab-6cfe-4b88-9efd-a44ecbcace87-tmp\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.346761 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.347044 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4ktf9\" (UniqueName: \"kubernetes.io/projected/e18aabab-6cfe-4b88-9efd-a44ecbcace87-kube-api-access-4ktf9\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.347077 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.347420 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e18aabab-6cfe-4b88-9efd-a44ecbcace87-tmp\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.348303 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.354630 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18aabab-6cfe-4b88-9efd-a44ecbcace87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.366517 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ktf9\" (UniqueName: \"kubernetes.io/projected/e18aabab-6cfe-4b88-9efd-a44ecbcace87-kube-api-access-4ktf9\") pod \"marketplace-operator-547dbd544d-t6j5g\" (UID: \"e18aabab-6cfe-4b88-9efd-a44ecbcace87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.462728 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f60e56b-3881-49ee-be41-5435327c1be3" containerID="5a87ce4dbe06f64afb1f619d8b0c573d04b896291877c1eda1d92c83341dfdde" exitCode=0 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.462926 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerDied","Data":"5a87ce4dbe06f64afb1f619d8b0c573d04b896291877c1eda1d92c83341dfdde"} Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.463017 5108 scope.go:117] "RemoveContainer" containerID="17a3c312150e2ad187bcb50ece3a0a3479395c7e181149518d0b3bec568dcd5a" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.468473 5108 generic.go:358] "Generic (PLEG): container finished" podID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerID="3f0b7cceb8942beae974160beea654ece1ffcbdf5f51cb46e2bcafac40dd76f7" exitCode=0 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.468632 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerDied","Data":"3f0b7cceb8942beae974160beea654ece1ffcbdf5f51cb46e2bcafac40dd76f7"} Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.473283 5108 generic.go:358] "Generic (PLEG): container finished" podID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerID="44c29c35f3f042606025783238fe84449fa274df709647a8bb2c6f5b25f6ea6a" exitCode=0 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.473407 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" 
event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerDied","Data":"44c29c35f3f042606025783238fe84449fa274df709647a8bb2c6f5b25f6ea6a"} Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.480534 5108 generic.go:358] "Generic (PLEG): container finished" podID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerID="0df55c9f0ebaec40aacdfbba7ebb6e0073cb9d22b3cdc2120d6cd95d09159f3c" exitCode=0 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.480760 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerDied","Data":"0df55c9f0ebaec40aacdfbba7ebb6e0073cb9d22b3cdc2120d6cd95d09159f3c"} Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.503660 5108 generic.go:358] "Generic (PLEG): container finished" podID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerID="7027daeb8294c638005dbc109971ebb173c299ff05d37653d85c7855028e63bd" exitCode=0 Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.503770 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerDied","Data":"7027daeb8294c638005dbc109971ebb173c299ff05d37653d85c7855028e63bd"} Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.508667 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.523617 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.553065 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") pod \"ef823528-7549-4a91-83c9-e5b243ecb37c\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.553173 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities\") pod \"ef823528-7549-4a91-83c9-e5b243ecb37c\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.553315 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7wl9\" (UniqueName: \"kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9\") pod \"ef823528-7549-4a91-83c9-e5b243ecb37c\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.557017 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities" (OuterVolumeSpecName: "utilities") pod "ef823528-7549-4a91-83c9-e5b243ecb37c" (UID: "ef823528-7549-4a91-83c9-e5b243ecb37c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.562648 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9" (OuterVolumeSpecName: "kube-api-access-p7wl9") pod "ef823528-7549-4a91-83c9-e5b243ecb37c" (UID: "ef823528-7549-4a91-83c9-e5b243ecb37c"). InnerVolumeSpecName "kube-api-access-p7wl9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.590776 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.591702 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.655554 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics\") pod \"7f60e56b-3881-49ee-be41-5435327c1be3\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.656800 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp\") pod \"7f60e56b-3881-49ee-be41-5435327c1be3\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.656838 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content\") pod \"d1e2eec1-1c52-4e62-b697-b308e89e1377\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.656880 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f7kc\" (UniqueName: \"kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc\") pod \"7f60e56b-3881-49ee-be41-5435327c1be3\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.656891 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef823528-7549-4a91-83c9-e5b243ecb37c" (UID: "ef823528-7549-4a91-83c9-e5b243ecb37c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.656955 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55fbs\" (UniqueName: \"kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs\") pod \"d1e2eec1-1c52-4e62-b697-b308e89e1377\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657044 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") pod \"ef823528-7549-4a91-83c9-e5b243ecb37c\" (UID: \"ef823528-7549-4a91-83c9-e5b243ecb37c\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657075 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca\") pod \"7f60e56b-3881-49ee-be41-5435327c1be3\" (UID: \"7f60e56b-3881-49ee-be41-5435327c1be3\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657119 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities\") pod \"d1e2eec1-1c52-4e62-b697-b308e89e1377\" (UID: \"d1e2eec1-1c52-4e62-b697-b308e89e1377\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657516 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657532 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p7wl9\" (UniqueName: \"kubernetes.io/projected/ef823528-7549-4a91-83c9-e5b243ecb37c-kube-api-access-p7wl9\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.657924 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp" (OuterVolumeSpecName: "tmp") pod "7f60e56b-3881-49ee-be41-5435327c1be3" (UID: "7f60e56b-3881-49ee-be41-5435327c1be3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: W0202 00:15:28.658719 5108 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ef823528-7549-4a91-83c9-e5b243ecb37c/volumes/kubernetes.io~empty-dir/catalog-content Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.658753 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef823528-7549-4a91-83c9-e5b243ecb37c" (UID: "ef823528-7549-4a91-83c9-e5b243ecb37c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.659143 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7f60e56b-3881-49ee-be41-5435327c1be3" (UID: "7f60e56b-3881-49ee-be41-5435327c1be3"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.661157 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities" (OuterVolumeSpecName: "utilities") pod "d1e2eec1-1c52-4e62-b697-b308e89e1377" (UID: "d1e2eec1-1c52-4e62-b697-b308e89e1377"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.662035 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs" (OuterVolumeSpecName: "kube-api-access-55fbs") pod "d1e2eec1-1c52-4e62-b697-b308e89e1377" (UID: "d1e2eec1-1c52-4e62-b697-b308e89e1377"). InnerVolumeSpecName "kube-api-access-55fbs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.662365 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc" (OuterVolumeSpecName: "kube-api-access-9f7kc") pod "7f60e56b-3881-49ee-be41-5435327c1be3" (UID: "7f60e56b-3881-49ee-be41-5435327c1be3"). InnerVolumeSpecName "kube-api-access-9f7kc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.663445 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7f60e56b-3881-49ee-be41-5435327c1be3" (UID: "7f60e56b-3881-49ee-be41-5435327c1be3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.669925 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.675178 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.731836 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1e2eec1-1c52-4e62-b697-b308e89e1377" (UID: "d1e2eec1-1c52-4e62-b697-b308e89e1377"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.758730 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content\") pod \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.758862 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities\") pod \"c7a5230e-8980-4561-bfb3-015283fcbaa4\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.758896 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities\") pod \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.758933 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmw2t\" (UniqueName: \"kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t\") pod \"c7a5230e-8980-4561-bfb3-015283fcbaa4\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.758960 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drd6d\" (UniqueName: \"kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d\") pod \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\" (UID: \"ab8f756d-4492-4dfc-ae46-80bb93dd6d86\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759009 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content\") pod \"c7a5230e-8980-4561-bfb3-015283fcbaa4\" (UID: \"c7a5230e-8980-4561-bfb3-015283fcbaa4\") " Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759350 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f60e56b-3881-49ee-be41-5435327c1be3-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759373 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759387 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9f7kc\" (UniqueName: \"kubernetes.io/projected/7f60e56b-3881-49ee-be41-5435327c1be3-kube-api-access-9f7kc\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759400 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55fbs\" (UniqueName: \"kubernetes.io/projected/d1e2eec1-1c52-4e62-b697-b308e89e1377-kube-api-access-55fbs\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759411 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef823528-7549-4a91-83c9-e5b243ecb37c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 
00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759425 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759436 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e2eec1-1c52-4e62-b697-b308e89e1377-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759448 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f60e56b-3881-49ee-be41-5435327c1be3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.759836 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities" (OuterVolumeSpecName: "utilities") pod "c7a5230e-8980-4561-bfb3-015283fcbaa4" (UID: "c7a5230e-8980-4561-bfb3-015283fcbaa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.760094 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities" (OuterVolumeSpecName: "utilities") pod "ab8f756d-4492-4dfc-ae46-80bb93dd6d86" (UID: "ab8f756d-4492-4dfc-ae46-80bb93dd6d86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.763371 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t" (OuterVolumeSpecName: "kube-api-access-lmw2t") pod "c7a5230e-8980-4561-bfb3-015283fcbaa4" (UID: "c7a5230e-8980-4561-bfb3-015283fcbaa4"). InnerVolumeSpecName "kube-api-access-lmw2t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.764762 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d" (OuterVolumeSpecName: "kube-api-access-drd6d") pod "ab8f756d-4492-4dfc-ae46-80bb93dd6d86" (UID: "ab8f756d-4492-4dfc-ae46-80bb93dd6d86"). InnerVolumeSpecName "kube-api-access-drd6d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.772714 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7a5230e-8980-4561-bfb3-015283fcbaa4" (UID: "c7a5230e-8980-4561-bfb3-015283fcbaa4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.860697 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.860737 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.860747 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lmw2t\" (UniqueName: \"kubernetes.io/projected/c7a5230e-8980-4561-bfb3-015283fcbaa4-kube-api-access-lmw2t\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.860761 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-drd6d\" (UniqueName: \"kubernetes.io/projected/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-kube-api-access-drd6d\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.860770 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a5230e-8980-4561-bfb3-015283fcbaa4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.862173 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab8f756d-4492-4dfc-ae46-80bb93dd6d86" (UID: "ab8f756d-4492-4dfc-ae46-80bb93dd6d86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.949923 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t6j5g"] Feb 02 00:15:28 crc kubenswrapper[5108]: I0202 00:15:28.962079 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8f756d-4492-4dfc-ae46-80bb93dd6d86-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.512289 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.512286 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmvtw" event={"ID":"7f60e56b-3881-49ee-be41-5435327c1be3","Type":"ContainerDied","Data":"b13ed7e02312952627a8fe290f3f42545cea89e59d6401fe8e6ee3b38f6bedcd"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.512852 5108 scope.go:117] "RemoveContainer" containerID="5a87ce4dbe06f64afb1f619d8b0c573d04b896291877c1eda1d92c83341dfdde" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.517016 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4h5k" event={"ID":"ab8f756d-4492-4dfc-ae46-80bb93dd6d86","Type":"ContainerDied","Data":"91f5baffdf47edb0dcf278405ff6c3e8bfcf6fb2a306cd416c02fa78eef020a8"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.517055 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4h5k" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.519566 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52cvp" event={"ID":"ef823528-7549-4a91-83c9-e5b243ecb37c","Type":"ContainerDied","Data":"f00eee2df222a89df8cd42cafd662c24a80cb3735fd8845f8256dd421fcd07cf"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.519607 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52cvp" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.522714 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8l8nm" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.523437 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8l8nm" event={"ID":"d1e2eec1-1c52-4e62-b697-b308e89e1377","Type":"ContainerDied","Data":"eb0a00b12767c4ff782045029b2e342458acfc4bf6b005b9598c899c329f4a88"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.524892 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" event={"ID":"e18aabab-6cfe-4b88-9efd-a44ecbcace87","Type":"ContainerStarted","Data":"051efece92d82137dd9b5124a826a948d42ddda520b6f14ed690e01ec2e92d42"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.524923 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" event={"ID":"e18aabab-6cfe-4b88-9efd-a44ecbcace87","Type":"ContainerStarted","Data":"b0e2467682612494f5f331113e372242f9f4b19ec7c4adfdf40f6ac8753455cf"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.526138 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.528553 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.529419 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzh6n" event={"ID":"c7a5230e-8980-4561-bfb3-015283fcbaa4","Type":"ContainerDied","Data":"ea9359a1525df7dedd3d0704fa36125a2831836999184f23e64643dd75e53b0e"} Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.533049 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzh6n" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.533095 5108 scope.go:117] "RemoveContainer" containerID="3f0b7cceb8942beae974160beea654ece1ffcbdf5f51cb46e2bcafac40dd76f7" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.560243 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-t6j5g" podStartSLOduration=1.560210997 podStartE2EDuration="1.560210997s" podCreationTimestamp="2026-02-02 00:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:29.558369453 +0000 UTC m=+328.833866463" watchObservedRunningTime="2026-02-02 00:15:29.560210997 +0000 UTC m=+328.835707927" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.573349 5108 scope.go:117] "RemoveContainer" containerID="5d731cd91d7fa626117bbc5d945723e255f66a42540c3ed2667dd196c604f711" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.610538 5108 scope.go:117] "RemoveContainer" containerID="c8b60dd30800821a50c8edf3cedf017fa85abf0860ba13bd51115ac055be3dc4" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.610660 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.616270 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8l8nm"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.659632 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.663931 5108 scope.go:117] "RemoveContainer" containerID="44c29c35f3f042606025783238fe84449fa274df709647a8bb2c6f5b25f6ea6a" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.684449 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g4h5k"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.698381 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.702990 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmvtw"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.707143 5108 scope.go:117] "RemoveContainer" containerID="e6aef248a8876a5e2dc03274ba4ae95994c688af754968e8c9c65f4a76f03504" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.707311 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-52cvp"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.710636 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-52cvp"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.713799 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.722048 5108 scope.go:117] "RemoveContainer" containerID="9b5a92a0aba545b8dbaeed6f9c1fc9550f60e0adaa5e10b74e9cc24a24cfad00" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.724294 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzh6n"] Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.738963 5108 
scope.go:117] "RemoveContainer" containerID="0df55c9f0ebaec40aacdfbba7ebb6e0073cb9d22b3cdc2120d6cd95d09159f3c" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.755005 5108 scope.go:117] "RemoveContainer" containerID="f739b14449c93c7de2447b64c031f8bff42355230b104d5359e8914ee83f1bb1" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.770218 5108 scope.go:117] "RemoveContainer" containerID="f04bb6768ab8660dd418d641eb48dd64d23f0bc1405200098b46dd1e736803c3" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.785058 5108 scope.go:117] "RemoveContainer" containerID="7027daeb8294c638005dbc109971ebb173c299ff05d37653d85c7855028e63bd" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.796563 5108 scope.go:117] "RemoveContainer" containerID="9a151e0c7d30d225dcdec2ca4f289d179587e1b95d1e6242438eb1c220d1f684" Feb 02 00:15:29 crc kubenswrapper[5108]: I0202 00:15:29.812297 5108 scope.go:117] "RemoveContainer" containerID="2e1ed35cecd83ec6e1cd535df757ea287981a6c7aebb8cec80b33fdbbc5c5139" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.274464 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-66j84"] Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.274995 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275014 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275028 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275034 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275043 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275049 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275062 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275067 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275075 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275080 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275091 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275096 5108 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275109 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275115 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275127 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275133 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275143 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275149 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275156 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275161 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="extract-content" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275168 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275174 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275181 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275186 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275194 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275199 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="extract-utilities" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275324 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275334 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275343 5108 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275349 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275357 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275367 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" containerName="registry-server" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275460 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.275467 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" containerName="marketplace-operator" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.591381 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66j84"] Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.591839 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rttj6"] Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.591640 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.595079 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.640624 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rttj6"] Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.640831 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.643132 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692632 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-catalog-content\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692722 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-utilities\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692794 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4zt\" (UniqueName: \"kubernetes.io/projected/32fc8227-87b8-4b48-9efa-da7031ec6c27-kube-api-access-kd4zt\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692882 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-utilities\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692909 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-catalog-content\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.692933 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsm5n\" (UniqueName: \"kubernetes.io/projected/47cf2dc5-b96a-4ed9-acfe-435ef357e479-kube-api-access-hsm5n\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.793786 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kd4zt\" (UniqueName: \"kubernetes.io/projected/32fc8227-87b8-4b48-9efa-da7031ec6c27-kube-api-access-kd4zt\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.793847 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-utilities\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " 
pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794065 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-catalog-content\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hsm5n\" (UniqueName: \"kubernetes.io/projected/47cf2dc5-b96a-4ed9-acfe-435ef357e479-kube-api-access-hsm5n\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794336 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-utilities\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794381 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cf2dc5-b96a-4ed9-acfe-435ef357e479-catalog-content\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794445 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-catalog-content\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794538 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-utilities\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794783 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-catalog-content\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.794890 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fc8227-87b8-4b48-9efa-da7031ec6c27-utilities\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.816241 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd4zt\" (UniqueName: \"kubernetes.io/projected/32fc8227-87b8-4b48-9efa-da7031ec6c27-kube-api-access-kd4zt\") pod \"certified-operators-66j84\" (UID: \"32fc8227-87b8-4b48-9efa-da7031ec6c27\") " 
pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.816298 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsm5n\" (UniqueName: \"kubernetes.io/projected/47cf2dc5-b96a-4ed9-acfe-435ef357e479-kube-api-access-hsm5n\") pod \"community-operators-rttj6\" (UID: \"47cf2dc5-b96a-4ed9-acfe-435ef357e479\") " pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.912554 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:30 crc kubenswrapper[5108]: I0202 00:15:30.960403 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.211143 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rttj6"] Feb 02 00:15:31 crc kubenswrapper[5108]: W0202 00:15:31.215002 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47cf2dc5_b96a_4ed9_acfe_435ef357e479.slice/crio-e5fd5c01044477e625ce0f1585cf68755d03a7346d001f10c8956bec5867d378 WatchSource:0}: Error finding container e5fd5c01044477e625ce0f1585cf68755d03a7346d001f10c8956bec5867d378: Status 404 returned error can't find the container with id e5fd5c01044477e625ce0f1585cf68755d03a7346d001f10c8956bec5867d378 Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.342402 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66j84"] Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.566466 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f60e56b-3881-49ee-be41-5435327c1be3" path="/var/lib/kubelet/pods/7f60e56b-3881-49ee-be41-5435327c1be3/volumes" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.567401 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8f756d-4492-4dfc-ae46-80bb93dd6d86" path="/var/lib/kubelet/pods/ab8f756d-4492-4dfc-ae46-80bb93dd6d86/volumes" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.568446 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a5230e-8980-4561-bfb3-015283fcbaa4" path="/var/lib/kubelet/pods/c7a5230e-8980-4561-bfb3-015283fcbaa4/volumes" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.569811 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e2eec1-1c52-4e62-b697-b308e89e1377" path="/var/lib/kubelet/pods/d1e2eec1-1c52-4e62-b697-b308e89e1377/volumes" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.577562 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef823528-7549-4a91-83c9-e5b243ecb37c" path="/var/lib/kubelet/pods/ef823528-7549-4a91-83c9-e5b243ecb37c/volumes" Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.578239 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66j84" event={"ID":"32fc8227-87b8-4b48-9efa-da7031ec6c27","Type":"ContainerStarted","Data":"0dd82895d8d5d0659dc7fa38f7be9b023ed8b7d64300cb40f8165b2618660d76"} Feb 02 00:15:31 crc kubenswrapper[5108]: I0202 00:15:31.578273 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttj6" 
event={"ID":"47cf2dc5-b96a-4ed9-acfe-435ef357e479","Type":"ContainerStarted","Data":"e5fd5c01044477e625ce0f1585cf68755d03a7346d001f10c8956bec5867d378"} Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.571172 5108 generic.go:358] "Generic (PLEG): container finished" podID="32fc8227-87b8-4b48-9efa-da7031ec6c27" containerID="d959d84a0f4b7b71870495427d00ae74eb4e53a953103b78a04200808fa086cd" exitCode=0 Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.571334 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66j84" event={"ID":"32fc8227-87b8-4b48-9efa-da7031ec6c27","Type":"ContainerDied","Data":"d959d84a0f4b7b71870495427d00ae74eb4e53a953103b78a04200808fa086cd"} Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.574950 5108 generic.go:358] "Generic (PLEG): container finished" podID="47cf2dc5-b96a-4ed9-acfe-435ef357e479" containerID="a11015bd30daa66b35f11475c271f148a8c0e46d729b4f21e99d0f802f918818" exitCode=0 Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.575030 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttj6" event={"ID":"47cf2dc5-b96a-4ed9-acfe-435ef357e479","Type":"ContainerDied","Data":"a11015bd30daa66b35f11475c271f148a8c0e46d729b4f21e99d0f802f918818"} Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.677565 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.950331 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.950693 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jwrx9"] Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.950526 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:32 crc kubenswrapper[5108]: I0202 00:15:32.953624 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.030247 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.030316 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.030353 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4ntj\" (UniqueName: \"kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.069172 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwrx9"] Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.069572 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.072370 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.131789 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76p7j\" (UniqueName: \"kubernetes.io/projected/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-kube-api-access-76p7j\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.131872 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.131925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.131966 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c4ntj\" (UniqueName: \"kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.132007 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-utilities\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.132034 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-catalog-content\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.132662 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.133073 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.157010 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c4ntj\" (UniqueName: \"kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj\") pod \"redhat-marketplace-cckv4\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.233844 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-catalog-content\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.234138 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-76p7j\" (UniqueName: \"kubernetes.io/projected/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-kube-api-access-76p7j\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.234311 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-utilities\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.234588 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-catalog-content\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.234877 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-utilities\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.256919 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-76p7j\" (UniqueName: \"kubernetes.io/projected/07e00e0c-ae6b-40eb-b439-06e770ecfc2a-kube-api-access-76p7j\") pod \"redhat-operators-jwrx9\" (UID: \"07e00e0c-ae6b-40eb-b439-06e770ecfc2a\") " pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.296343 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.386887 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.726198 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:15:33 crc kubenswrapper[5108]: W0202 00:15:33.739441 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cf96b4d_fc9a_4ed1_9383_fb367f5a05de.slice/crio-8f80f46a1e430bbf0bdd470106ede3f5f57d87904d6e8abf62bdcd95557040b0 WatchSource:0}: Error finding container 8f80f46a1e430bbf0bdd470106ede3f5f57d87904d6e8abf62bdcd95557040b0: Status 404 returned error can't find the container with id 8f80f46a1e430bbf0bdd470106ede3f5f57d87904d6e8abf62bdcd95557040b0 Feb 02 00:15:33 crc kubenswrapper[5108]: I0202 00:15:33.937319 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwrx9"] Feb 02 00:15:33 crc kubenswrapper[5108]: W0202 00:15:33.949905 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07e00e0c_ae6b_40eb_b439_06e770ecfc2a.slice/crio-3a767d8000380188ba9e582a5942221ecfcdc5629f2d755a861545d42ab829e1 WatchSource:0}: Error finding container 3a767d8000380188ba9e582a5942221ecfcdc5629f2d755a861545d42ab829e1: Status 404 returned error can't find the container with id 3a767d8000380188ba9e582a5942221ecfcdc5629f2d755a861545d42ab829e1 Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.614927 5108 generic.go:358] "Generic (PLEG): container finished" podID="07e00e0c-ae6b-40eb-b439-06e770ecfc2a" containerID="d40fbb7dc5b56f14c50a9e5bb126a49d75f6a90e7aa0cbb941f24d67bc1317f9" exitCode=0 Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.614974 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwrx9" event={"ID":"07e00e0c-ae6b-40eb-b439-06e770ecfc2a","Type":"ContainerDied","Data":"d40fbb7dc5b56f14c50a9e5bb126a49d75f6a90e7aa0cbb941f24d67bc1317f9"} Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.615712 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwrx9" event={"ID":"07e00e0c-ae6b-40eb-b439-06e770ecfc2a","Type":"ContainerStarted","Data":"3a767d8000380188ba9e582a5942221ecfcdc5629f2d755a861545d42ab829e1"} Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.623250 5108 generic.go:358] "Generic (PLEG): container finished" podID="32fc8227-87b8-4b48-9efa-da7031ec6c27" containerID="243cfd976efb56c1fbd3914ef3a3b9d9975c07131d7b2126faa470f0685ebaae" exitCode=0 Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.623325 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66j84" event={"ID":"32fc8227-87b8-4b48-9efa-da7031ec6c27","Type":"ContainerDied","Data":"243cfd976efb56c1fbd3914ef3a3b9d9975c07131d7b2126faa470f0685ebaae"} Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.628318 5108 generic.go:358] "Generic (PLEG): container finished" podID="47cf2dc5-b96a-4ed9-acfe-435ef357e479" containerID="8dc2e03b98df24dbfda41a5175c2a7c82b40a3bf42a22fa3f2f3d29f101f49ef" exitCode=0 Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.628381 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttj6" event={"ID":"47cf2dc5-b96a-4ed9-acfe-435ef357e479","Type":"ContainerDied","Data":"8dc2e03b98df24dbfda41a5175c2a7c82b40a3bf42a22fa3f2f3d29f101f49ef"} Feb 02 
00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.635126 5108 generic.go:358] "Generic (PLEG): container finished" podID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerID="66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0" exitCode=0 Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.635295 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerDied","Data":"66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0"} Feb 02 00:15:34 crc kubenswrapper[5108]: I0202 00:15:34.635327 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerStarted","Data":"8f80f46a1e430bbf0bdd470106ede3f5f57d87904d6e8abf62bdcd95557040b0"} Feb 02 00:15:35 crc kubenswrapper[5108]: I0202 00:15:35.642819 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66j84" event={"ID":"32fc8227-87b8-4b48-9efa-da7031ec6c27","Type":"ContainerStarted","Data":"666a9143a79043e670103b2fdc2070e9e2a7e8f14e82dd5a4f49644e5d71cb31"} Feb 02 00:15:35 crc kubenswrapper[5108]: I0202 00:15:35.644591 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttj6" event={"ID":"47cf2dc5-b96a-4ed9-acfe-435ef357e479","Type":"ContainerStarted","Data":"a433a86a43d536e9ad3c94986300b1a6f329f18d06d96689496a472b756c2df2"} Feb 02 00:15:35 crc kubenswrapper[5108]: I0202 00:15:35.661353 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-66j84" podStartSLOduration=4.355130613 podStartE2EDuration="5.661335613s" podCreationTimestamp="2026-02-02 00:15:30 +0000 UTC" firstStartedPulling="2026-02-02 00:15:32.572924723 +0000 UTC m=+331.848421683" lastFinishedPulling="2026-02-02 00:15:33.879129753 +0000 UTC m=+333.154626683" observedRunningTime="2026-02-02 00:15:35.660338513 +0000 UTC m=+334.935835473" watchObservedRunningTime="2026-02-02 00:15:35.661335613 +0000 UTC m=+334.936832543" Feb 02 00:15:35 crc kubenswrapper[5108]: I0202 00:15:35.685261 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rttj6" podStartSLOduration=4.423021046 podStartE2EDuration="5.685237998s" podCreationTimestamp="2026-02-02 00:15:30 +0000 UTC" firstStartedPulling="2026-02-02 00:15:32.576042245 +0000 UTC m=+331.851539175" lastFinishedPulling="2026-02-02 00:15:33.838259197 +0000 UTC m=+333.113756127" observedRunningTime="2026-02-02 00:15:35.680488158 +0000 UTC m=+334.955985098" watchObservedRunningTime="2026-02-02 00:15:35.685237998 +0000 UTC m=+334.960734928" Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.198747 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerName="registry" containerID="cri-o://527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe" gracePeriod=30 Feb 02 00:15:36 crc kubenswrapper[5108]: E0202 00:15:36.380987 5108 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07e00e0c_ae6b_40eb_b439_06e770ecfc2a.slice/crio-conmon-d6d1ceb2d019203e910a84570ad552dc3de6d75db6f95ea52f0fd54aab6024d2.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51ba194a_1171_4ed4_a843_0c39ac61d268.slice/crio-conmon-527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe.scope\": RecentStats: unable to find data in memory cache]" Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.456712 5108 patch_prober.go:28] interesting pod/image-registry-66587d64c8-mjr86 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.22:5000/healthz\": dial tcp 10.217.0.22:5000: connect: connection refused" start-of-body= Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.456802 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.22:5000/healthz\": dial tcp 10.217.0.22:5000: connect: connection refused" Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.651402 5108 generic.go:358] "Generic (PLEG): container finished" podID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerID="c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c" exitCode=0 Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.651457 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerDied","Data":"c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c"} Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.653533 5108 generic.go:358] "Generic (PLEG): container finished" podID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerID="527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe" exitCode=0 Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.653751 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" event={"ID":"51ba194a-1171-4ed4-a843-0c39ac61d268","Type":"ContainerDied","Data":"527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe"} Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.658465 5108 generic.go:358] "Generic (PLEG): container finished" podID="07e00e0c-ae6b-40eb-b439-06e770ecfc2a" containerID="d6d1ceb2d019203e910a84570ad552dc3de6d75db6f95ea52f0fd54aab6024d2" exitCode=0 Feb 02 00:15:36 crc kubenswrapper[5108]: I0202 00:15:36.659915 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwrx9" event={"ID":"07e00e0c-ae6b-40eb-b439-06e770ecfc2a","Type":"ContainerDied","Data":"d6d1ceb2d019203e910a84570ad552dc3de6d75db6f95ea52f0fd54aab6024d2"} Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.228468 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302648 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302738 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302771 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqbvn\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302866 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302902 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.302925 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.303090 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.303144 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates\") pod \"51ba194a-1171-4ed4-a843-0c39ac61d268\" (UID: \"51ba194a-1171-4ed4-a843-0c39ac61d268\") " Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.304482 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.305032 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.316885 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.317185 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.324397 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.325928 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.327397 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn" (OuterVolumeSpecName: "kube-api-access-sqbvn") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "kube-api-access-sqbvn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.328867 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "51ba194a-1171-4ed4-a843-0c39ac61d268" (UID: "51ba194a-1171-4ed4-a843-0c39ac61d268"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404812 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404849 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sqbvn\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-kube-api-access-sqbvn\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404860 5108 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ba194a-1171-4ed4-a843-0c39ac61d268-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404870 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404879 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ba194a-1171-4ed4-a843-0c39ac61d268-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404887 5108 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.404895 5108 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ba194a-1171-4ed4-a843-0c39ac61d268-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.667407 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerStarted","Data":"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8"} Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.668821 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" event={"ID":"51ba194a-1171-4ed4-a843-0c39ac61d268","Type":"ContainerDied","Data":"1447dcac9c96a7085eca20122133eb4f717b3af0915a27a86280d315ab8e69c0"} Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.668858 5108 scope.go:117] "RemoveContainer" containerID="527145b28c45c3ea8eb6f6c44f7c51865dd5843b1597aa9cf927f7436a5c19fe" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.669038 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mjr86" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.671832 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwrx9" event={"ID":"07e00e0c-ae6b-40eb-b439-06e770ecfc2a","Type":"ContainerStarted","Data":"af70f43b3c041d3cb1b22e029fe41d4a22fa982aa4755053d2298e608695b0ba"} Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.701128 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cckv4" podStartSLOduration=4.838770985 podStartE2EDuration="5.701113241s" podCreationTimestamp="2026-02-02 00:15:32 +0000 UTC" firstStartedPulling="2026-02-02 00:15:34.636484394 +0000 UTC m=+333.911981324" lastFinishedPulling="2026-02-02 00:15:35.49882665 +0000 UTC m=+334.774323580" observedRunningTime="2026-02-02 00:15:37.696756932 +0000 UTC m=+336.972253862" watchObservedRunningTime="2026-02-02 00:15:37.701113241 +0000 UTC m=+336.976610171" Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.716019 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.717794 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mjr86"] Feb 02 00:15:37 crc kubenswrapper[5108]: I0202 00:15:37.730934 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jwrx9" podStartSLOduration=4.85897181 podStartE2EDuration="5.73092171s" podCreationTimestamp="2026-02-02 00:15:32 +0000 UTC" firstStartedPulling="2026-02-02 00:15:34.616735551 +0000 UTC m=+333.892232511" lastFinishedPulling="2026-02-02 00:15:35.488685481 +0000 UTC m=+334.764182411" observedRunningTime="2026-02-02 00:15:37.727955492 +0000 UTC m=+337.003452432" watchObservedRunningTime="2026-02-02 00:15:37.73092171 +0000 UTC m=+337.006418640" Feb 02 00:15:39 crc kubenswrapper[5108]: I0202 00:15:39.565107 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" path="/var/lib/kubelet/pods/51ba194a-1171-4ed4-a843-0c39ac61d268/volumes" Feb 02 00:15:40 crc kubenswrapper[5108]: I0202 00:15:40.913740 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:40 crc kubenswrapper[5108]: I0202 00:15:40.914095 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:40 crc kubenswrapper[5108]: I0202 00:15:40.961883 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:40 crc kubenswrapper[5108]: I0202 00:15:40.961945 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:40 crc kubenswrapper[5108]: I0202 00:15:40.965570 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:41 crc kubenswrapper[5108]: I0202 00:15:41.008806 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:41 crc kubenswrapper[5108]: I0202 00:15:41.747258 5108 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-66j84" Feb 02 00:15:41 crc kubenswrapper[5108]: I0202 00:15:41.756455 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rttj6" Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.237691 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.238416 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" podUID="77d8873e-3275-40a4-987d-a8d2f5489461" containerName="controller-manager" containerID="cri-o://6d76ca120eed12f5955fe4993b5e130be9e960cdb6b5ad865d61be03b84b9de0" gracePeriod=30 Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.270454 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.270742 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" podUID="36503b52-c5de-4acc-9b2d-4b006a58c586" containerName="route-controller-manager" containerID="cri-o://51422b9b14c5e121e52c764cd05f2c885e1a9040876867e3b6e98ed49215c05a" gracePeriod=30 Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.704374 5108 generic.go:358] "Generic (PLEG): container finished" podID="77d8873e-3275-40a4-987d-a8d2f5489461" containerID="6d76ca120eed12f5955fe4993b5e130be9e960cdb6b5ad865d61be03b84b9de0" exitCode=0 Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.704960 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" event={"ID":"77d8873e-3275-40a4-987d-a8d2f5489461","Type":"ContainerDied","Data":"6d76ca120eed12f5955fe4993b5e130be9e960cdb6b5ad865d61be03b84b9de0"} Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.706758 5108 generic.go:358] "Generic (PLEG): container finished" podID="36503b52-c5de-4acc-9b2d-4b006a58c586" containerID="51422b9b14c5e121e52c764cd05f2c885e1a9040876867e3b6e98ed49215c05a" exitCode=0 Feb 02 00:15:42 crc kubenswrapper[5108]: I0202 00:15:42.706879 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" event={"ID":"36503b52-c5de-4acc-9b2d-4b006a58c586","Type":"ContainerDied","Data":"51422b9b14c5e121e52c764cd05f2c885e1a9040876867e3b6e98ed49215c05a"} Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.296984 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.297352 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.345020 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.387467 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.387517 5108 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.396087 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.425324 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.425879 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36503b52-c5de-4acc-9b2d-4b006a58c586" containerName="route-controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.425896 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="36503b52-c5de-4acc-9b2d-4b006a58c586" containerName="route-controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.425906 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerName="registry" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.425912 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerName="registry" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.426015 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="51ba194a-1171-4ed4-a843-0c39ac61d268" containerName="registry" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.426025 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="36503b52-c5de-4acc-9b2d-4b006a58c586" containerName="route-controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.500497 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca\") pod \"36503b52-c5de-4acc-9b2d-4b006a58c586\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.500864 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp\") pod \"36503b52-c5de-4acc-9b2d-4b006a58c586\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.501343 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp" (OuterVolumeSpecName: "tmp") pod "36503b52-c5de-4acc-9b2d-4b006a58c586" (UID: "36503b52-c5de-4acc-9b2d-4b006a58c586"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.501377 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert\") pod \"36503b52-c5de-4acc-9b2d-4b006a58c586\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.501468 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5gmn\" (UniqueName: \"kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn\") pod \"36503b52-c5de-4acc-9b2d-4b006a58c586\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.501569 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config\") pod \"36503b52-c5de-4acc-9b2d-4b006a58c586\" (UID: \"36503b52-c5de-4acc-9b2d-4b006a58c586\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.502104 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36503b52-c5de-4acc-9b2d-4b006a58c586-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.502137 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca" (OuterVolumeSpecName: "client-ca") pod "36503b52-c5de-4acc-9b2d-4b006a58c586" (UID: "36503b52-c5de-4acc-9b2d-4b006a58c586"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.502766 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config" (OuterVolumeSpecName: "config") pod "36503b52-c5de-4acc-9b2d-4b006a58c586" (UID: "36503b52-c5de-4acc-9b2d-4b006a58c586"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.506957 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "36503b52-c5de-4acc-9b2d-4b006a58c586" (UID: "36503b52-c5de-4acc-9b2d-4b006a58c586"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.508947 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn" (OuterVolumeSpecName: "kube-api-access-q5gmn") pod "36503b52-c5de-4acc-9b2d-4b006a58c586" (UID: "36503b52-c5de-4acc-9b2d-4b006a58c586"). InnerVolumeSpecName "kube-api-access-q5gmn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.517624 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.517809 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.517928 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.607290 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.607331 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36503b52-c5de-4acc-9b2d-4b006a58c586-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.607343 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36503b52-c5de-4acc-9b2d-4b006a58c586-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.607354 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5gmn\" (UniqueName: \"kubernetes.io/projected/36503b52-c5de-4acc-9b2d-4b006a58c586-kube-api-access-q5gmn\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.682911 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.708229 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghk22\" (UniqueName: \"kubernetes.io/projected/cc5b803c-69f0-47e3-89b1-54dadfc985a6-kube-api-access-ghk22\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.708279 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-client-ca\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.708304 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-config\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.708325 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc5b803c-69f0-47e3-89b1-54dadfc985a6-tmp\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.708364 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc5b803c-69f0-47e3-89b1-54dadfc985a6-serving-cert\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.714865 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" event={"ID":"36503b52-c5de-4acc-9b2d-4b006a58c586","Type":"ContainerDied","Data":"bd55002ad86a550361e62870063a3fae4c4e9cc5bee2e68716b86baa8fdcd306"} Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.714914 5108 scope.go:117] "RemoveContainer" containerID="51422b9b14c5e121e52c764cd05f2c885e1a9040876867e3b6e98ed49215c05a" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.715059 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.719023 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.719030 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65678dd567-lql72" event={"ID":"77d8873e-3275-40a4-987d-a8d2f5489461","Type":"ContainerDied","Data":"cbbb9d530c606d7c20d199a0daee8fc2b7af8b3c2f71306efb862a8569b37212"} Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.739730 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.740364 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77d8873e-3275-40a4-987d-a8d2f5489461" containerName="controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.740382 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d8873e-3275-40a4-987d-a8d2f5489461" containerName="controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.740484 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="77d8873e-3275-40a4-987d-a8d2f5489461" containerName="controller-manager" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.748435 5108 scope.go:117] "RemoveContainer" containerID="6d76ca120eed12f5955fe4993b5e130be9e960cdb6b5ad865d61be03b84b9de0" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786049 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786084 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786141 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jwrx9" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786177 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786187 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79b98f778c-rmbgx"] Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.786387 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.811585 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.811647 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.811686 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.811710 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.812349 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp" (OuterVolumeSpecName: "tmp") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.812903 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca" (OuterVolumeSpecName: "client-ca") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.812925 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813095 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config" (OuterVolumeSpecName: "config") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813109 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4kx5\" (UniqueName: \"kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813275 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert\") pod \"77d8873e-3275-40a4-987d-a8d2f5489461\" (UID: \"77d8873e-3275-40a4-987d-a8d2f5489461\") " Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813432 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc5b803c-69f0-47e3-89b1-54dadfc985a6-serving-cert\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813475 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-tmp\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813538 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-client-ca\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813590 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-config\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813694 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghk22\" (UniqueName: \"kubernetes.io/projected/cc5b803c-69f0-47e3-89b1-54dadfc985a6-kube-api-access-ghk22\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813737 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-client-ca\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813761 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhrz\" 
(UniqueName: \"kubernetes.io/projected/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-kube-api-access-wdhrz\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813788 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-serving-cert\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813863 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-proxy-ca-bundles\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.813898 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-config\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.815972 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-client-ca\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.817988 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc5b803c-69f0-47e3-89b1-54dadfc985a6-tmp\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.818468 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc5b803c-69f0-47e3-89b1-54dadfc985a6-tmp\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.818738 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.818769 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77d8873e-3275-40a4-987d-a8d2f5489461-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.818789 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.818804 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77d8873e-3275-40a4-987d-a8d2f5489461-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.819579 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5b803c-69f0-47e3-89b1-54dadfc985a6-config\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.820530 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.828661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc5b803c-69f0-47e3-89b1-54dadfc985a6-serving-cert\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.840449 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5" (OuterVolumeSpecName: "kube-api-access-w4kx5") pod "77d8873e-3275-40a4-987d-a8d2f5489461" (UID: "77d8873e-3275-40a4-987d-a8d2f5489461"). InnerVolumeSpecName "kube-api-access-w4kx5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.848098 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghk22\" (UniqueName: \"kubernetes.io/projected/cc5b803c-69f0-47e3-89b1-54dadfc985a6-kube-api-access-ghk22\") pod \"route-controller-manager-655fbf5f68-mccmg\" (UID: \"cc5b803c-69f0-47e3-89b1-54dadfc985a6\") " pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.865598 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.922899 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-client-ca\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.922950 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-config\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.923154 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wdhrz\" (UniqueName: \"kubernetes.io/projected/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-kube-api-access-wdhrz\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.923253 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-serving-cert\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.923277 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-proxy-ca-bundles\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.923768 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-tmp\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.924068 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4kx5\" (UniqueName: \"kubernetes.io/projected/77d8873e-3275-40a4-987d-a8d2f5489461-kube-api-access-w4kx5\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.924092 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77d8873e-3275-40a4-987d-a8d2f5489461-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.924584 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-client-ca\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 
02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.924992 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-tmp\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.925394 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-config\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.925507 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-proxy-ca-bundles\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.932363 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-serving-cert\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:43 crc kubenswrapper[5108]: I0202 00:15:43.945058 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdhrz\" (UniqueName: \"kubernetes.io/projected/27cfbd17-fe89-42f2-8cbf-ba0587c2e216-kube-api-access-wdhrz\") pod \"controller-manager-577b8bfd5c-8n7dp\" (UID: \"27cfbd17-fe89-42f2-8cbf-ba0587c2e216\") " pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.064063 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.067875 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65678dd567-lql72"] Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.100939 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.325918 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg"] Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.529817 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp"] Feb 02 00:15:44 crc kubenswrapper[5108]: W0202 00:15:44.539682 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27cfbd17_fe89_42f2_8cbf_ba0587c2e216.slice/crio-7442d8683d2c3a59fd61250fa32ce56c085c5162e065f24015c9fdfd47774def WatchSource:0}: Error finding container 7442d8683d2c3a59fd61250fa32ce56c085c5162e065f24015c9fdfd47774def: Status 404 returned error can't find the container with id 7442d8683d2c3a59fd61250fa32ce56c085c5162e065f24015c9fdfd47774def Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.724886 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" event={"ID":"27cfbd17-fe89-42f2-8cbf-ba0587c2e216","Type":"ContainerStarted","Data":"91a24f251d05389d13c6a20c13002484b1140f85b6cb416ae2bde6d84d328b2a"} Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.724948 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" event={"ID":"27cfbd17-fe89-42f2-8cbf-ba0587c2e216","Type":"ContainerStarted","Data":"7442d8683d2c3a59fd61250fa32ce56c085c5162e065f24015c9fdfd47774def"} Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.725330 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.728272 5108 patch_prober.go:28] interesting pod/controller-manager-577b8bfd5c-8n7dp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.728339 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" podUID="27cfbd17-fe89-42f2-8cbf-ba0587c2e216" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.731095 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" event={"ID":"cc5b803c-69f0-47e3-89b1-54dadfc985a6","Type":"ContainerStarted","Data":"6ed6957229bbe464a286863bea6453b78fd9ff6c983cb4cd8723f9a91d1892b6"} Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.731131 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" event={"ID":"cc5b803c-69f0-47e3-89b1-54dadfc985a6","Type":"ContainerStarted","Data":"986223b802da487179d036e6cc603afcadfbd026d94190f2f1fbb2264bc934fd"} Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.731588 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.748654 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" podStartSLOduration=2.748635552 podStartE2EDuration="2.748635552s" podCreationTimestamp="2026-02-02 00:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:44.748085315 +0000 UTC m=+344.023582255" watchObservedRunningTime="2026-02-02 00:15:44.748635552 +0000 UTC m=+344.024132482" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.773526 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" podStartSLOduration=2.773505845 podStartE2EDuration="2.773505845s" podCreationTimestamp="2026-02-02 00:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:15:44.77265699 +0000 UTC m=+344.048153930" watchObservedRunningTime="2026-02-02 00:15:44.773505845 +0000 UTC m=+344.049002775" Feb 02 00:15:44 crc kubenswrapper[5108]: I0202 00:15:44.996366 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-655fbf5f68-mccmg" Feb 02 00:15:45 crc kubenswrapper[5108]: I0202 00:15:45.564874 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36503b52-c5de-4acc-9b2d-4b006a58c586" path="/var/lib/kubelet/pods/36503b52-c5de-4acc-9b2d-4b006a58c586/volumes" Feb 02 00:15:45 crc kubenswrapper[5108]: I0202 00:15:45.567105 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77d8873e-3275-40a4-987d-a8d2f5489461" path="/var/lib/kubelet/pods/77d8873e-3275-40a4-987d-a8d2f5489461/volumes" Feb 02 00:15:45 crc kubenswrapper[5108]: I0202 00:15:45.749293 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-577b8bfd5c-8n7dp" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.147409 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499856-n677f"] Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.157438 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499856-n677f"] Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.157606 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.159664 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.160754 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.161286 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.271584 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4s5s\" (UniqueName: \"kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s\") pod \"auto-csr-approver-29499856-n677f\" (UID: \"b2d68061-8bea-4670-828e-3fd982547198\") " pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.372693 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4s5s\" (UniqueName: \"kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s\") pod \"auto-csr-approver-29499856-n677f\" (UID: \"b2d68061-8bea-4670-828e-3fd982547198\") " pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.396318 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4s5s\" (UniqueName: \"kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s\") pod \"auto-csr-approver-29499856-n677f\" (UID: \"b2d68061-8bea-4670-828e-3fd982547198\") " pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.491084 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:00 crc kubenswrapper[5108]: I0202 00:16:00.952600 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499856-n677f"] Feb 02 00:16:01 crc kubenswrapper[5108]: I0202 00:16:01.834737 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499856-n677f" event={"ID":"b2d68061-8bea-4670-828e-3fd982547198","Type":"ContainerStarted","Data":"6566d979307f3380d2c4f036bef1b6dbef18c8813653cec90a70aa044d64d0e3"} Feb 02 00:16:04 crc kubenswrapper[5108]: I0202 00:16:04.328690 5108 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-lw9mr" Feb 02 00:16:04 crc kubenswrapper[5108]: I0202 00:16:04.356594 5108 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-lw9mr" Feb 02 00:16:04 crc kubenswrapper[5108]: I0202 00:16:04.853854 5108 generic.go:358] "Generic (PLEG): container finished" podID="b2d68061-8bea-4670-828e-3fd982547198" containerID="b0d175fd10d4619cf043b11fd6ec6f1927ee4a1ffad44abf1e805ecf0fef43df" exitCode=0 Feb 02 00:16:04 crc kubenswrapper[5108]: I0202 00:16:04.854043 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499856-n677f" event={"ID":"b2d68061-8bea-4670-828e-3fd982547198","Type":"ContainerDied","Data":"b0d175fd10d4619cf043b11fd6ec6f1927ee4a1ffad44abf1e805ecf0fef43df"} Feb 02 00:16:05 crc kubenswrapper[5108]: I0202 00:16:05.357808 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-04 00:11:04 +0000 UTC" deadline="2026-02-23 01:35:38.826451502 +0000 UTC" Feb 02 00:16:05 crc kubenswrapper[5108]: I0202 00:16:05.357873 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="505h19m33.468584702s" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.288369 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.358290 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-04 00:11:04 +0000 UTC" deadline="2026-02-26 07:39:43.056299098 +0000 UTC" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.358330 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="583h23m36.697972586s" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.455998 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4s5s\" (UniqueName: \"kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s\") pod \"b2d68061-8bea-4670-828e-3fd982547198\" (UID: \"b2d68061-8bea-4670-828e-3fd982547198\") " Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.462315 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s" (OuterVolumeSpecName: "kube-api-access-w4s5s") pod "b2d68061-8bea-4670-828e-3fd982547198" (UID: "b2d68061-8bea-4670-828e-3fd982547198"). InnerVolumeSpecName "kube-api-access-w4s5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.557482 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4s5s\" (UniqueName: \"kubernetes.io/projected/b2d68061-8bea-4670-828e-3fd982547198-kube-api-access-w4s5s\") on node \"crc\" DevicePath \"\"" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.867576 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499856-n677f" Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.867605 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499856-n677f" event={"ID":"b2d68061-8bea-4670-828e-3fd982547198","Type":"ContainerDied","Data":"6566d979307f3380d2c4f036bef1b6dbef18c8813653cec90a70aa044d64d0e3"} Feb 02 00:16:06 crc kubenswrapper[5108]: I0202 00:16:06.867657 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6566d979307f3380d2c4f036bef1b6dbef18c8813653cec90a70aa044d64d0e3" Feb 02 00:17:20 crc kubenswrapper[5108]: I0202 00:17:20.919061 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:17:20 crc kubenswrapper[5108]: I0202 00:17:20.919554 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:17:50 crc kubenswrapper[5108]: I0202 00:17:50.920044 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:17:50 crc kubenswrapper[5108]: I0202 00:17:50.921518 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.158787 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499858-dzzxv"] Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.160391 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2d68061-8bea-4670-828e-3fd982547198" containerName="oc" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.160407 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d68061-8bea-4670-828e-3fd982547198" containerName="oc" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.160553 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b2d68061-8bea-4670-828e-3fd982547198" containerName="oc" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.166887 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.169665 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499858-dzzxv"] Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.170153 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.170153 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.170187 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.276150 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb285\" (UniqueName: \"kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285\") pod \"auto-csr-approver-29499858-dzzxv\" (UID: \"431bfb08-11a6-4c66-893c-650ea32d97b3\") " pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.379010 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zb285\" (UniqueName: \"kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285\") pod \"auto-csr-approver-29499858-dzzxv\" (UID: \"431bfb08-11a6-4c66-893c-650ea32d97b3\") " pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.411524 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb285\" (UniqueName: \"kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285\") pod \"auto-csr-approver-29499858-dzzxv\" (UID: \"431bfb08-11a6-4c66-893c-650ea32d97b3\") " pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.500696 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:00 crc kubenswrapper[5108]: I0202 00:18:00.767071 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499858-dzzxv"] Feb 02 00:18:01 crc kubenswrapper[5108]: I0202 00:18:01.648708 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" event={"ID":"431bfb08-11a6-4c66-893c-650ea32d97b3","Type":"ContainerStarted","Data":"9c44358844cd8275a7e0441686ab61a17e123743a63f4d684b49bae3cad21589"} Feb 02 00:18:02 crc kubenswrapper[5108]: I0202 00:18:02.657400 5108 generic.go:358] "Generic (PLEG): container finished" podID="431bfb08-11a6-4c66-893c-650ea32d97b3" containerID="ff61ff81d7abb5723358d9eb219b89d933545279f212b14a8a7b31b99a0fd8b3" exitCode=0 Feb 02 00:18:02 crc kubenswrapper[5108]: I0202 00:18:02.657500 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" event={"ID":"431bfb08-11a6-4c66-893c-650ea32d97b3","Type":"ContainerDied","Data":"ff61ff81d7abb5723358d9eb219b89d933545279f212b14a8a7b31b99a0fd8b3"} Feb 02 00:18:03 crc kubenswrapper[5108]: I0202 00:18:03.956075 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.033933 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb285\" (UniqueName: \"kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285\") pod \"431bfb08-11a6-4c66-893c-650ea32d97b3\" (UID: \"431bfb08-11a6-4c66-893c-650ea32d97b3\") " Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.043468 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285" (OuterVolumeSpecName: "kube-api-access-zb285") pod "431bfb08-11a6-4c66-893c-650ea32d97b3" (UID: "431bfb08-11a6-4c66-893c-650ea32d97b3"). InnerVolumeSpecName "kube-api-access-zb285". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.135095 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zb285\" (UniqueName: \"kubernetes.io/projected/431bfb08-11a6-4c66-893c-650ea32d97b3-kube-api-access-zb285\") on node \"crc\" DevicePath \"\"" Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.690568 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" event={"ID":"431bfb08-11a6-4c66-893c-650ea32d97b3","Type":"ContainerDied","Data":"9c44358844cd8275a7e0441686ab61a17e123743a63f4d684b49bae3cad21589"} Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.690645 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c44358844cd8275a7e0441686ab61a17e123743a63f4d684b49bae3cad21589" Feb 02 00:18:04 crc kubenswrapper[5108]: I0202 00:18:04.690795 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499858-dzzxv" Feb 02 00:18:20 crc kubenswrapper[5108]: I0202 00:18:20.919445 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:18:20 crc kubenswrapper[5108]: I0202 00:18:20.921164 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:18:20 crc kubenswrapper[5108]: I0202 00:18:20.921313 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:18:20 crc kubenswrapper[5108]: I0202 00:18:20.923313 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:18:20 crc kubenswrapper[5108]: I0202 00:18:20.923478 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" containerID="cri-o://0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e" gracePeriod=600 Feb 02 00:18:21 crc kubenswrapper[5108]: I0202 00:18:21.811474 5108 generic.go:358] "Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e" exitCode=0 Feb 02 00:18:21 crc kubenswrapper[5108]: I0202 00:18:21.811569 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e"} Feb 02 00:18:21 crc kubenswrapper[5108]: I0202 00:18:21.811974 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31"} Feb 02 00:18:21 crc kubenswrapper[5108]: I0202 00:18:21.812013 5108 scope.go:117] "RemoveContainer" containerID="7fc8656729a54679c3362014ce0e7b635c6707581fd8f75d82363290e04cf73f" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.134890 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499860-n8hbz"] Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.137014 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="431bfb08-11a6-4c66-893c-650ea32d97b3" containerName="oc" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.137066 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="431bfb08-11a6-4c66-893c-650ea32d97b3" containerName="oc" Feb 02 00:20:00 crc 
kubenswrapper[5108]: I0202 00:20:00.137275 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="431bfb08-11a6-4c66-893c-650ea32d97b3" containerName="oc" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.142341 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.145948 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499860-n8hbz"] Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.146878 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.146927 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.147348 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.256681 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5962b\" (UniqueName: \"kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b\") pod \"auto-csr-approver-29499860-n8hbz\" (UID: \"c1c738be-c891-4aa6-adfd-c1234cf80512\") " pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.358422 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5962b\" (UniqueName: \"kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b\") pod \"auto-csr-approver-29499860-n8hbz\" (UID: \"c1c738be-c891-4aa6-adfd-c1234cf80512\") " pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.380725 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5962b\" (UniqueName: \"kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b\") pod \"auto-csr-approver-29499860-n8hbz\" (UID: \"c1c738be-c891-4aa6-adfd-c1234cf80512\") " pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.472131 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:00 crc kubenswrapper[5108]: I0202 00:20:00.674029 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499860-n8hbz"] Feb 02 00:20:01 crc kubenswrapper[5108]: I0202 00:20:01.570206 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" event={"ID":"c1c738be-c891-4aa6-adfd-c1234cf80512","Type":"ContainerStarted","Data":"d21f25759a585ac6a1b9f8e54ec2077c9f4fd028ce77db4c07b5381baf4072a2"} Feb 02 00:20:01 crc kubenswrapper[5108]: I0202 00:20:01.818440 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:20:01 crc kubenswrapper[5108]: I0202 00:20:01.818614 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:20:02 crc kubenswrapper[5108]: I0202 00:20:02.623549 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" event={"ID":"c1c738be-c891-4aa6-adfd-c1234cf80512","Type":"ContainerStarted","Data":"4889d1b8838ddcd25d685c454fac6b652c42c5979336992c7b26bb11fe672dbf"} Feb 02 00:20:02 crc kubenswrapper[5108]: I0202 00:20:02.659381 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" podStartSLOduration=1.355852188 podStartE2EDuration="2.659345998s" podCreationTimestamp="2026-02-02 00:20:00 +0000 UTC" firstStartedPulling="2026-02-02 00:20:00.683883691 +0000 UTC m=+599.959380621" lastFinishedPulling="2026-02-02 00:20:01.987377501 +0000 UTC m=+601.262874431" observedRunningTime="2026-02-02 00:20:02.643713064 +0000 UTC m=+601.919210004" watchObservedRunningTime="2026-02-02 00:20:02.659345998 +0000 UTC m=+601.934842938" Feb 02 00:20:03 crc kubenswrapper[5108]: I0202 00:20:03.638048 5108 generic.go:358] "Generic (PLEG): container finished" podID="c1c738be-c891-4aa6-adfd-c1234cf80512" containerID="4889d1b8838ddcd25d685c454fac6b652c42c5979336992c7b26bb11fe672dbf" exitCode=0 Feb 02 00:20:03 crc kubenswrapper[5108]: I0202 00:20:03.638355 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" event={"ID":"c1c738be-c891-4aa6-adfd-c1234cf80512","Type":"ContainerDied","Data":"4889d1b8838ddcd25d685c454fac6b652c42c5979336992c7b26bb11fe672dbf"} Feb 02 00:20:04 crc kubenswrapper[5108]: I0202 00:20:04.941009 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.054622 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5962b\" (UniqueName: \"kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b\") pod \"c1c738be-c891-4aa6-adfd-c1234cf80512\" (UID: \"c1c738be-c891-4aa6-adfd-c1234cf80512\") " Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.063716 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b" (OuterVolumeSpecName: "kube-api-access-5962b") pod "c1c738be-c891-4aa6-adfd-c1234cf80512" (UID: "c1c738be-c891-4aa6-adfd-c1234cf80512"). 
InnerVolumeSpecName "kube-api-access-5962b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.156965 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5962b\" (UniqueName: \"kubernetes.io/projected/c1c738be-c891-4aa6-adfd-c1234cf80512-kube-api-access-5962b\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.655271 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" event={"ID":"c1c738be-c891-4aa6-adfd-c1234cf80512","Type":"ContainerDied","Data":"d21f25759a585ac6a1b9f8e54ec2077c9f4fd028ce77db4c07b5381baf4072a2"} Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.655381 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21f25759a585ac6a1b9f8e54ec2077c9f4fd028ce77db4c07b5381baf4072a2" Feb 02 00:20:05 crc kubenswrapper[5108]: I0202 00:20:05.655313 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499860-n8hbz" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.354010 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr"] Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.354724 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="kube-rbac-proxy" containerID="cri-o://1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.354778 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="ovnkube-cluster-manager" containerID="cri-o://c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.539361 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-66k84"] Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540093 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-controller" containerID="cri-o://e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540210 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="nbdb" containerID="cri-o://430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540303 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-node" containerID="cri-o://dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540306 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" 
podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-acl-logging" containerID="cri-o://5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540346 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540384 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="sbdb" containerID="cri-o://af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.540239 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="northd" containerID="cri-o://99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.580035 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovnkube-controller" containerID="cri-o://32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb" gracePeriod=30 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.608165 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.652322 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk"] Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653164 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1c738be-c891-4aa6-adfd-c1234cf80512" containerName="oc" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653192 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c738be-c891-4aa6-adfd-c1234cf80512" containerName="oc" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653282 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="ovnkube-cluster-manager" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653294 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="ovnkube-cluster-manager" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653308 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="kube-rbac-proxy" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653319 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="kube-rbac-proxy" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653452 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="kube-rbac-proxy" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653478 5108 
memory_manager.go:356] "RemoveStaleState removing state" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerName="ovnkube-cluster-manager" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.653496 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c1c738be-c891-4aa6-adfd-c1234cf80512" containerName="oc" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.661125 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.673677 5108 generic.go:358] "Generic (PLEG): container finished" podID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerID="c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" exitCode=0 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.673702 5108 generic.go:358] "Generic (PLEG): container finished" podID="0298f7da-43a3-48a4-8e32-b772a82bd62d" containerID="1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" exitCode=0 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.674094 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.674391 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerDied","Data":"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017"} Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.674419 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerDied","Data":"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe"} Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.674432 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr" event={"ID":"0298f7da-43a3-48a4-8e32-b772a82bd62d","Type":"ContainerDied","Data":"b2c9667b3266dc7724f630d2a6f5b000f311e7134a92929d6e1f8855fc654058"} Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.674454 5108 scope.go:117] "RemoveContainer" containerID="c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.677820 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.677846 5108 generic.go:358] "Generic (PLEG): container finished" podID="24f8cedc-9b82-4ef7-a7db-4ce57803e0ce" containerID="9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9" exitCode=2 Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.677951 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q22wv" event={"ID":"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce","Type":"ContainerDied","Data":"9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9"} Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.680002 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert\") pod 
\"0298f7da-43a3-48a4-8e32-b772a82bd62d\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.680082 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config\") pod \"0298f7da-43a3-48a4-8e32-b772a82bd62d\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.680599 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsmhb\" (UniqueName: \"kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb\") pod \"0298f7da-43a3-48a4-8e32-b772a82bd62d\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.680680 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides\") pod \"0298f7da-43a3-48a4-8e32-b772a82bd62d\" (UID: \"0298f7da-43a3-48a4-8e32-b772a82bd62d\") " Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.680963 5108 scope.go:117] "RemoveContainer" containerID="9c5e5c2ea644c8c1c102faa4d6fd3cbd760e08749ca8a10652fc78ef4d9f0df9" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.681567 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0298f7da-43a3-48a4-8e32-b772a82bd62d" (UID: "0298f7da-43a3-48a4-8e32-b772a82bd62d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.681612 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0298f7da-43a3-48a4-8e32-b772a82bd62d" (UID: "0298f7da-43a3-48a4-8e32-b772a82bd62d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.684487 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.695840 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb" (OuterVolumeSpecName: "kube-api-access-rsmhb") pod "0298f7da-43a3-48a4-8e32-b772a82bd62d" (UID: "0298f7da-43a3-48a4-8e32-b772a82bd62d"). InnerVolumeSpecName "kube-api-access-rsmhb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.696858 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "0298f7da-43a3-48a4-8e32-b772a82bd62d" (UID: "0298f7da-43a3-48a4-8e32-b772a82bd62d"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.729468 5108 scope.go:117] "RemoveContainer" containerID="1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.753700 5108 scope.go:117] "RemoveContainer" containerID="c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" Feb 02 00:20:06 crc kubenswrapper[5108]: E0202 00:20:06.754640 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017\": container with ID starting with c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017 not found: ID does not exist" containerID="c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.754677 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017"} err="failed to get container status \"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017\": rpc error: code = NotFound desc = could not find container \"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017\": container with ID starting with c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017 not found: ID does not exist" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.754698 5108 scope.go:117] "RemoveContainer" containerID="1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" Feb 02 00:20:06 crc kubenswrapper[5108]: E0202 00:20:06.754969 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe\": container with ID starting with 1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe not found: ID does not exist" containerID="1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.754992 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe"} err="failed to get container status \"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe\": rpc error: code = NotFound desc = could not find container \"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe\": container with ID starting with 1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe not found: ID does not exist" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.755008 5108 scope.go:117] "RemoveContainer" containerID="c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.755280 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017"} err="failed to get container status \"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017\": rpc error: code = NotFound desc = could not find container \"c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017\": container with ID starting with c6c361eecab5fc0c3f7798bedc1ee127af7183adf71c85f68a8393f03f96f017 not found: ID does not exist" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.755327 5108 
scope.go:117] "RemoveContainer" containerID="1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.759805 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe"} err="failed to get container status \"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe\": rpc error: code = NotFound desc = could not find container \"1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe\": container with ID starting with 1c132371dcb3e180b8cf4dd9a48ae5bd77dc98228bc44a308cf47ab4db773ffe not found: ID does not exist" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.781947 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n2h9\" (UniqueName: \"kubernetes.io/projected/68ee81b3-e585-46a6-b47c-666f0c3f187f-kube-api-access-8n2h9\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782048 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782093 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782122 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782155 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782166 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rsmhb\" (UniqueName: \"kubernetes.io/projected/0298f7da-43a3-48a4-8e32-b772a82bd62d-kube-api-access-rsmhb\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782177 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0298f7da-43a3-48a4-8e32-b772a82bd62d-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.782186 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/0298f7da-43a3-48a4-8e32-b772a82bd62d-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.883566 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.883653 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.883689 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.883711 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8n2h9\" (UniqueName: \"kubernetes.io/projected/68ee81b3-e585-46a6-b47c-666f0c3f187f-kube-api-access-8n2h9\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.884632 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.884978 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68ee81b3-e585-46a6-b47c-666f0c3f187f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.891181 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68ee81b3-e585-46a6-b47c-666f0c3f187f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.902518 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n2h9\" (UniqueName: \"kubernetes.io/projected/68ee81b3-e585-46a6-b47c-666f0c3f187f-kube-api-access-8n2h9\") pod \"ovnkube-control-plane-97c9b6c48-c5qrk\" (UID: \"68ee81b3-e585-46a6-b47c-666f0c3f187f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" 
Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.965160 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-66k84_d0c5973e-49ea-41a0-87d5-c8e867ee8a66/ovn-acl-logging/0.log" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.965756 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-66k84_d0c5973e-49ea-41a0-87d5-c8e867ee8a66/ovn-controller/0.log" Feb 02 00:20:06 crc kubenswrapper[5108]: I0202 00:20:06.966341 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.017803 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.022465 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr"] Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.029710 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ccnbr"] Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.034742 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-88x4v"] Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035373 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="sbdb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035395 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="sbdb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035411 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="northd" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035418 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="northd" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035427 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-controller" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035432 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-controller" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035439 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035446 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035458 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-node" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035465 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-node" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035476 5108 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="nbdb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035482 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="nbdb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035491 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovnkube-controller" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035498 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovnkube-controller" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035521 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kubecfg-setup" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035527 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kubecfg-setup" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035538 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-acl-logging" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035544 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-acl-logging" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035643 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-acl-logging" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035656 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-node" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035665 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="northd" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035675 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovnkube-controller" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035685 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="ovn-controller" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035708 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035715 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="nbdb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.035722 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerName="sbdb" Feb 02 00:20:07 crc kubenswrapper[5108]: W0202 00:20:07.038534 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68ee81b3_e585_46a6_b47c_666f0c3f187f.slice/crio-5da4f41f2e193c4444f3d8b722f253d9800cfe582ceff9381bc724b5cde0f112 WatchSource:0}: Error finding container 
5da4f41f2e193c4444f3d8b722f253d9800cfe582ceff9381bc724b5cde0f112: Status 404 returned error can't find the container with id 5da4f41f2e193c4444f3d8b722f253d9800cfe582ceff9381bc724b5cde0f112 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.046337 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.087982 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088113 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088121 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfgl7\" (UniqueName: \"kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088209 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088254 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088282 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088299 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088337 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088319 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088342 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088446 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088523 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088552 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088588 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088682 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088644 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log" (OuterVolumeSpecName: "node-log") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088759 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088787 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088698 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088813 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088717 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash" (OuterVolumeSpecName: "host-slash") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088856 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088863 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088948 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089019 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088971 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089108 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket" (OuterVolumeSpecName: "log-socket") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089151 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089236 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.088997 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089247 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089280 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089260 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089326 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch\") pod \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\" (UID: \"d0c5973e-49ea-41a0-87d5-c8e867ee8a66\") " Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089436 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089541 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089571 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.089970 5108 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090001 5108 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090019 5108 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090034 5108 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090046 5108 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090057 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090069 5108 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090082 5108 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090093 5108 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090104 5108 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090115 5108 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090127 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090137 5108 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 
crc kubenswrapper[5108]: I0202 00:20:07.090148 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090160 5108 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090171 5108 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.090182 5108 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.093655 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7" (OuterVolumeSpecName: "kube-api-access-vfgl7") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "kube-api-access-vfgl7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.094032 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.112822 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d0c5973e-49ea-41a0-87d5-c8e867ee8a66" (UID: "d0c5973e-49ea-41a0-87d5-c8e867ee8a66"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192301 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-node-log\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192394 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192539 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-netd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192643 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-bin\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192694 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-kubelet\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.192730 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-env-overrides\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193066 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193121 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-config\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193177 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-systemd-units\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193203 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6l68\" (UniqueName: \"kubernetes.io/projected/9ea50c71-4688-4245-91de-32018497eac8-kube-api-access-n6l68\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193266 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-script-lib\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193755 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-etc-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193844 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-systemd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193881 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-log-socket\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193898 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-ovn-kubernetes\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193962 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-var-lib-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.193992 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-ovn\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194048 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea50c71-4688-4245-91de-32018497eac8-ovn-node-metrics-cert\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194100 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-slash\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194214 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-netns\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194534 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfgl7\" (UniqueName: \"kubernetes.io/projected/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-kube-api-access-vfgl7\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194568 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.194582 5108 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0c5973e-49ea-41a0-87d5-c8e867ee8a66-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.295913 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-node-log\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.295985 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296006 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-netd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296022 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-bin\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296042 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-kubelet\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296058 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-env-overrides\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296077 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-node-log\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296137 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296214 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296269 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-netd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296358 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-config\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296424 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-kubelet\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.296493 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-cni-bin\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297012 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-config\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297123 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-systemd-units\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297148 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-systemd-units\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297175 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n6l68\" (UniqueName: \"kubernetes.io/projected/9ea50c71-4688-4245-91de-32018497eac8-kube-api-access-n6l68\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297196 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-script-lib\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297245 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-etc-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297301 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-systemd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297335 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-log-socket\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297355 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-ovn-kubernetes\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297413 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-var-lib-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297447 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-ovn\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297470 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea50c71-4688-4245-91de-32018497eac8-ovn-node-metrics-cert\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297499 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-slash\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297517 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-netns\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297659 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-env-overrides\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297688 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-log-socket\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297694 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-var-lib-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297675 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297750 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-ovn\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297729 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-slash\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.297782 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-etc-openvswitch\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.298088 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-run-systemd\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.298123 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ea50c71-4688-4245-91de-32018497eac8-host-run-netns\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.301589 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ea50c71-4688-4245-91de-32018497eac8-ovnkube-script-lib\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.307682 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea50c71-4688-4245-91de-32018497eac8-ovn-node-metrics-cert\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.333490 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6l68\" (UniqueName: \"kubernetes.io/projected/9ea50c71-4688-4245-91de-32018497eac8-kube-api-access-n6l68\") pod \"ovnkube-node-88x4v\" (UID: \"9ea50c71-4688-4245-91de-32018497eac8\") " pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.380938 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:07 crc kubenswrapper[5108]: W0202 00:20:07.416584 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ea50c71_4688_4245_91de_32018497eac8.slice/crio-87de97bc7249daffbaaad7798d6efe705d2c2b56a894d785f25de56a585e0c81 WatchSource:0}: Error finding container 87de97bc7249daffbaaad7798d6efe705d2c2b56a894d785f25de56a585e0c81: Status 404 returned error can't find the container with id 87de97bc7249daffbaaad7798d6efe705d2c2b56a894d785f25de56a585e0c81 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.572076 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0298f7da-43a3-48a4-8e32-b772a82bd62d" path="/var/lib/kubelet/pods/0298f7da-43a3-48a4-8e32-b772a82bd62d/volumes" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.686889 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" event={"ID":"68ee81b3-e585-46a6-b47c-666f0c3f187f","Type":"ContainerStarted","Data":"d43ea4f141e778ce15c7d84c6be8fc1afe568358ebb8a829408c47103ec6b179"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.686937 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" event={"ID":"68ee81b3-e585-46a6-b47c-666f0c3f187f","Type":"ContainerStarted","Data":"ca0b6506433443b50731051676008349603ee2480502143e3963bceceb6c8072"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.686948 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" event={"ID":"68ee81b3-e585-46a6-b47c-666f0c3f187f","Type":"ContainerStarted","Data":"5da4f41f2e193c4444f3d8b722f253d9800cfe582ceff9381bc724b5cde0f112"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691291 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-66k84_d0c5973e-49ea-41a0-87d5-c8e867ee8a66/ovn-acl-logging/0.log" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691705 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-66k84_d0c5973e-49ea-41a0-87d5-c8e867ee8a66/ovn-controller/0.log" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691959 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691976 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691983 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691989 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.691995 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" 
containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692002 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692009 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54" exitCode=143 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692016 5108 generic.go:358] "Generic (PLEG): container finished" podID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1" exitCode=143 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692114 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692137 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692149 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692160 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692172 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692182 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692196 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692205 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692210 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} Feb 02 00:20:07 crc kubenswrapper[5108]: 
I0202 00:20:07.692217 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692243 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692250 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692255 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692260 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692266 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692271 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692276 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692282 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692287 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692295 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692303 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692310 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692315 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692321 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692327 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692332 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692337 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692342 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692347 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692353 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" event={"ID":"d0c5973e-49ea-41a0-87d5-c8e867ee8a66","Type":"ContainerDied","Data":"7a2461c6a473f94ba1ea1904c2b0cd4abbd44d50e56c3ab93bba762c867a78ab"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692362 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692368 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692373 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692378 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692383 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692388 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692393 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692398 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692403 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692418 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.692681 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66k84" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.696164 5108 generic.go:358] "Generic (PLEG): container finished" podID="9ea50c71-4688-4245-91de-32018497eac8" containerID="f23786514e364fed84da6806a7ffc903708b5c196da419cc70977c4987182a7a" exitCode=0 Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.696273 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerDied","Data":"f23786514e364fed84da6806a7ffc903708b5c196da419cc70977c4987182a7a"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.696332 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"87de97bc7249daffbaaad7798d6efe705d2c2b56a894d785f25de56a585e0c81"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.699594 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.699643 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q22wv" event={"ID":"24f8cedc-9b82-4ef7-a7db-4ce57803e0ce","Type":"ContainerStarted","Data":"406af3ef6372a6e1fc055ce202a3a9c98241fd5d181894169fdb5f42557f16ec"} Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.715869 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-c5qrk" podStartSLOduration=1.7158488109999999 podStartE2EDuration="1.715848811s" podCreationTimestamp="2026-02-02 00:20:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:20:07.715260656 +0000 UTC m=+606.990757606" watchObservedRunningTime="2026-02-02 00:20:07.715848811 +0000 UTC m=+606.991345741" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.719523 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.743537 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.761538 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-66k84"] Feb 02 00:20:07 crc kubenswrapper[5108]: 
I0202 00:20:07.764043 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-66k84"]
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.780559 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.808165 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.825847 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.839395 5108 scope.go:117] "RemoveContainer" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.855609 5108 scope.go:117] "RemoveContainer" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.873947 5108 scope.go:117] "RemoveContainer" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.891320 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.896821 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.896861 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} err="failed to get container status \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.896885 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.897306 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.897373 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} err="failed to get container status \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.897424 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.897946 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.897975 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} err="failed to get container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.897991 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.898277 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.898308 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} err="failed to get container status \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.898332 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.898603 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.898651 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} err="failed to get container status \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.898681 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.899011 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899041 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} err="failed to get container status \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899062 5108 scope.go:117] "RemoveContainer" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.899531 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": container with ID starting with 5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54 not found: ID does not exist" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899558 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} err="failed to get container status \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": rpc error: code = NotFound desc = could not find container \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": container with ID starting with 5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899578 5108 scope.go:117] "RemoveContainer" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.899821 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": container with ID starting with e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1 not found: ID does not exist" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899837 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} err="failed to get container status \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": rpc error: code = NotFound desc = could not find container \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": container with ID starting with e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.899850 5108 scope.go:117] "RemoveContainer" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"
Feb 02 00:20:07 crc kubenswrapper[5108]: E0202 00:20:07.900113 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": container with ID starting with 44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f not found: ID does not exist" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.900130 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} err="failed to get container status \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": rpc error: code = NotFound desc = could not find container \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": container with ID starting with 44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.900142 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.901666 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} err="failed to get container status \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.901692 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.901938 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} err="failed to get container status \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.901957 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902245 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} err="failed to get container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902266 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902544 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} err="failed to get container status \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902568 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902888 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} err="failed to get container status \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.902909 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903127 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} err="failed to get container status \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903146 5108 scope.go:117] "RemoveContainer" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903376 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} err="failed to get container status \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": rpc error: code = NotFound desc = could not find container \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": container with ID starting with 5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903395 5108 scope.go:117] "RemoveContainer" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903682 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} err="failed to get container status \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": rpc error: code = NotFound desc = could not find container \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": container with ID starting with e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1 not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903703 5108 scope.go:117] "RemoveContainer" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.903994 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} err="failed to get container status \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": rpc error: code = NotFound desc = could not find container \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": container with ID starting with 44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.904029 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.904400 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} err="failed to get container status \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.904424 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.904658 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} err="failed to get container status \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.904680 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"
Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906184 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} err="failed to get
container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906220 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906564 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} err="failed to get container status \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906599 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906851 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} err="failed to get container status \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.906873 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.907581 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} err="failed to get container status \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.907618 5108 scope.go:117] "RemoveContainer" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.907897 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} err="failed to get container status \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": rpc error: code = NotFound desc = could not find container \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": container with ID starting with 5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.907935 5108 scope.go:117] "RemoveContainer" 
containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.908372 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} err="failed to get container status \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": rpc error: code = NotFound desc = could not find container \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": container with ID starting with e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.908407 5108 scope.go:117] "RemoveContainer" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.908899 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} err="failed to get container status \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": rpc error: code = NotFound desc = could not find container \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": container with ID starting with 44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.908950 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909200 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} err="failed to get container status \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909238 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909460 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} err="failed to get container status \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909479 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909694 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} err="failed to get container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find 
container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909711 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909929 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} err="failed to get container status \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.909949 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910213 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} err="failed to get container status \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910260 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910555 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} err="failed to get container status \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910582 5108 scope.go:117] "RemoveContainer" containerID="5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910817 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54"} err="failed to get container status \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": rpc error: code = NotFound desc = could not find container \"5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54\": container with ID starting with 5e93a5cfbc0cb20e014c931c0fce6a583dca9d100d04aa9bd88e4175a77e1f54 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.910837 5108 scope.go:117] "RemoveContainer" containerID="e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.911053 5108 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1"} err="failed to get container status \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": rpc error: code = NotFound desc = could not find container \"e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1\": container with ID starting with e36ffdcebdabd80e1ba4e3bb96b36abcc32179098cb684d2f01fa95f211123f1 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.911074 5108 scope.go:117] "RemoveContainer" containerID="44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.912850 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f"} err="failed to get container status \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": rpc error: code = NotFound desc = could not find container \"44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f\": container with ID starting with 44fcc653b47a401d24d8f833338567a8648cc4e549dee3110d270879bde5949f not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.912875 5108 scope.go:117] "RemoveContainer" containerID="32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.913650 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb"} err="failed to get container status \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": rpc error: code = NotFound desc = could not find container \"32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb\": container with ID starting with 32c7f95e0bc398a90131638795e3ae0852aeabd954592db494b836518ef6eacb not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.913734 5108 scope.go:117] "RemoveContainer" containerID="af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.914537 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba"} err="failed to get container status \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": rpc error: code = NotFound desc = could not find container \"af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba\": container with ID starting with af913f23445a6d68d5a15ca7cd680f2e358ec23e8f9a9fa9a967ef2b448ae8ba not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.914563 5108 scope.go:117] "RemoveContainer" containerID="430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.914961 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913"} err="failed to get container status \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": rpc error: code = NotFound desc = could not find container \"430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913\": container with ID starting with 
430233c8f41fd3a30c60af18e2f2b3efaa62b93186f015c43b57d25c73f3a913 not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.914999 5108 scope.go:117] "RemoveContainer" containerID="99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.915811 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a"} err="failed to get container status \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": rpc error: code = NotFound desc = could not find container \"99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a\": container with ID starting with 99cfff76ecd400b27c2cc5ab5335e6bf39f94996d9baf3a462834947a50a498a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.915832 5108 scope.go:117] "RemoveContainer" containerID="72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.916483 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a"} err="failed to get container status \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": rpc error: code = NotFound desc = could not find container \"72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a\": container with ID starting with 72a565df88f5617883b7392a304b953cfb8133c76ce79f14590850d206be964a not found: ID does not exist" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.916527 5108 scope.go:117] "RemoveContainer" containerID="dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde" Feb 02 00:20:07 crc kubenswrapper[5108]: I0202 00:20:07.916962 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde"} err="failed to get container status \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": rpc error: code = NotFound desc = could not find container \"dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde\": container with ID starting with dff579e4fdd215ef3a79968757d48ef0ed5e85697d55d0c0ea116f7b2b383bde not found: ID does not exist" Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.707628 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"9609f9dbefd49afe34553b2ff7d0ff2adcf2c7e9cf92ab924ac3aca6f0975601"} Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.708122 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"7b587dca3afe45a86c2a1781b6863cb401c6e6c9897d81b60491b42517896bec"} Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.708133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"586e2b02889aa5a52eb290641469801a2abfd503960e49e3a04449766cd54cba"} Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.708141 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" 
event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"76beff8c79e081f35c4e907b6e7547b9fe9e2aaaa1ce368968fcee01609ac155"} Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.708149 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"bba4ab8158708ab6919840fd8dcb47d067983480a673033ee09671a2a544a96a"} Feb 02 00:20:08 crc kubenswrapper[5108]: I0202 00:20:08.708158 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"d3aaa098b91666bae033d23ff717732330e1766e040e26e76dd4de0ffc3a107a"} Feb 02 00:20:09 crc kubenswrapper[5108]: I0202 00:20:09.570199 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0c5973e-49ea-41a0-87d5-c8e867ee8a66" path="/var/lib/kubelet/pods/d0c5973e-49ea-41a0-87d5-c8e867ee8a66/volumes" Feb 02 00:20:11 crc kubenswrapper[5108]: I0202 00:20:11.738900 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"4964cc7134072d267bed3957a8780d31bc5847382791d3bf48ddaec539be6182"} Feb 02 00:20:13 crc kubenswrapper[5108]: I0202 00:20:13.761815 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" event={"ID":"9ea50c71-4688-4245-91de-32018497eac8","Type":"ContainerStarted","Data":"5319f1393973320f2445163ace38aeba4800d88d1bd4739403799cac20641a48"} Feb 02 00:20:13 crc kubenswrapper[5108]: I0202 00:20:13.762514 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:13 crc kubenswrapper[5108]: I0202 00:20:13.762586 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:13 crc kubenswrapper[5108]: I0202 00:20:13.802443 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" podStartSLOduration=6.802388611 podStartE2EDuration="6.802388611s" podCreationTimestamp="2026-02-02 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:20:13.797675039 +0000 UTC m=+613.073171969" watchObservedRunningTime="2026-02-02 00:20:13.802388611 +0000 UTC m=+613.077885541" Feb 02 00:20:13 crc kubenswrapper[5108]: I0202 00:20:13.803256 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:14 crc kubenswrapper[5108]: I0202 00:20:14.773469 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:14 crc kubenswrapper[5108]: I0202 00:20:14.820688 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:46 crc kubenswrapper[5108]: I0202 00:20:46.826831 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-88x4v" Feb 02 00:20:50 crc kubenswrapper[5108]: I0202 00:20:50.919721 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:20:50 crc kubenswrapper[5108]: I0202 00:20:50.920130 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.154494 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.155733 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cckv4" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="registry-server" containerID="cri-o://428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8" gracePeriod=30 Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.571443 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.737529 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4ntj\" (UniqueName: \"kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj\") pod \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.737606 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content\") pod \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.737668 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities\") pod \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\" (UID: \"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de\") " Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.740980 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities" (OuterVolumeSpecName: "utilities") pod "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" (UID: "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.747500 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj" (OuterVolumeSpecName: "kube-api-access-c4ntj") pod "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" (UID: "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de"). InnerVolumeSpecName "kube-api-access-c4ntj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.750950 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" (UID: "5cf96b4d-fc9a-4ed1-9383-fb367f5a05de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.840571 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c4ntj\" (UniqueName: \"kubernetes.io/projected/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-kube-api-access-c4ntj\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.840617 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:12 crc kubenswrapper[5108]: I0202 00:21:12.840627 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.234855 5108 generic.go:358] "Generic (PLEG): container finished" podID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerID="428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8" exitCode=0 Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.234931 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerDied","Data":"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8"} Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.234978 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckv4" event={"ID":"5cf96b4d-fc9a-4ed1-9383-fb367f5a05de","Type":"ContainerDied","Data":"8f80f46a1e430bbf0bdd470106ede3f5f57d87904d6e8abf62bdcd95557040b0"} Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.234981 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckv4" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.234997 5108 scope.go:117] "RemoveContainer" containerID="428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.261449 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.265292 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckv4"] Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.268468 5108 scope.go:117] "RemoveContainer" containerID="c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.305152 5108 scope.go:117] "RemoveContainer" containerID="66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.320794 5108 scope.go:117] "RemoveContainer" containerID="428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8" Feb 02 00:21:13 crc kubenswrapper[5108]: E0202 00:21:13.321494 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8\": container with ID starting with 428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8 not found: ID does not exist" containerID="428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.321527 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8"} err="failed to get container status \"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8\": rpc error: code = NotFound desc = could not find container \"428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8\": container with ID starting with 428b7cc57f563c07799d2f76afff138aa87f42e08229323731f07a451f13f7f8 not found: ID does not exist" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.321551 5108 scope.go:117] "RemoveContainer" containerID="c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c" Feb 02 00:21:13 crc kubenswrapper[5108]: E0202 00:21:13.322191 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c\": container with ID starting with c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c not found: ID does not exist" containerID="c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.322217 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c"} err="failed to get container status \"c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c\": rpc error: code = NotFound desc = could not find container \"c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c\": container with ID starting with c4462c47978df534085261646eb211297974c469b758b193c664425eea81ad2c not found: ID does not exist" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.322262 5108 scope.go:117] "RemoveContainer" 
containerID="66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0" Feb 02 00:21:13 crc kubenswrapper[5108]: E0202 00:21:13.322533 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0\": container with ID starting with 66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0 not found: ID does not exist" containerID="66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.322554 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0"} err="failed to get container status \"66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0\": rpc error: code = NotFound desc = could not find container \"66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0\": container with ID starting with 66a92fcf085fd40b92b9dfb518ca00744ca7b70d043a3add4f26e039022689a0 not found: ID does not exist" Feb 02 00:21:13 crc kubenswrapper[5108]: I0202 00:21:13.567833 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" path="/var/lib/kubelet/pods/5cf96b4d-fc9a-4ed1-9383-fb367f5a05de/volumes" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.891940 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"] Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893183 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="extract-utilities" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893268 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="extract-utilities" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893311 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="registry-server" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893328 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="registry-server" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893376 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="extract-content" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893394 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="extract-content" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.893616 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="5cf96b4d-fc9a-4ed1-9383-fb367f5a05de" containerName="registry-server" Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.900078 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.904093 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 02 00:21:15 crc kubenswrapper[5108]: I0202 00:21:15.906996 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"]
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.084920 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.084998 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.085058 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6k8\" (UniqueName: \"kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.186488 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.186548 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.186822 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6k8\" (UniqueName: \"kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.187062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.187098 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.209530 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk6k8\" (UniqueName: \"kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.220614 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:16 crc kubenswrapper[5108]: I0202 00:21:16.472087 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"]
Feb 02 00:21:17 crc kubenswrapper[5108]: I0202 00:21:17.267577 5108 generic.go:358] "Generic (PLEG): container finished" podID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerID="c0e77d5c881bd16da700dc8c585be4c30d3a4c7939538a230b08090258a9f793" exitCode=0
Feb 02 00:21:17 crc kubenswrapper[5108]: I0202 00:21:17.267684 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" event={"ID":"3b577ebd-ea5b-4c70-b43d-826f4ea87884","Type":"ContainerDied","Data":"c0e77d5c881bd16da700dc8c585be4c30d3a4c7939538a230b08090258a9f793"}
Feb 02 00:21:17 crc kubenswrapper[5108]: I0202 00:21:17.268208 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" event={"ID":"3b577ebd-ea5b-4c70-b43d-826f4ea87884","Type":"ContainerStarted","Data":"cc8d25dea57e7e52a5d788f8c0e53956ed52d0364567e99eae8fc75630fe7ca9"}
Feb 02 00:21:19 crc kubenswrapper[5108]: I0202 00:21:19.284995 5108 generic.go:358] "Generic (PLEG): container finished" podID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerID="c2e94d842157cd23f78b3f813a79398dd69be41be0d83e88b3b4d9d1b59a07e8" exitCode=0
Feb 02 00:21:19 crc kubenswrapper[5108]: I0202 00:21:19.285127 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" event={"ID":"3b577ebd-ea5b-4c70-b43d-826f4ea87884","Type":"ContainerDied","Data":"c2e94d842157cd23f78b3f813a79398dd69be41be0d83e88b3b4d9d1b59a07e8"}
Feb 02 00:21:20 crc kubenswrapper[5108]: I0202 00:21:20.293348 5108 generic.go:358] "Generic (PLEG): container finished" podID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerID="9517f4486885d0e23ba040c8061ca727ac2c30500d7f28233a8136c672fbaa25" exitCode=0
Feb 02 00:21:20 crc kubenswrapper[5108]: I0202 00:21:20.293433 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" event={"ID":"3b577ebd-ea5b-4c70-b43d-826f4ea87884","Type":"ContainerDied","Data":"9517f4486885d0e23ba040c8061ca727ac2c30500d7f28233a8136c672fbaa25"}
Feb 02 00:21:20 crc kubenswrapper[5108]: I0202 00:21:20.919169 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 00:21:20 crc kubenswrapper[5108]: I0202 00:21:20.919322 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.550312 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.674178 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util\") pod \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") "
Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.674256 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk6k8\" (UniqueName: \"kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8\") pod \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") "
Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.674307 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle\") pod \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\" (UID: \"3b577ebd-ea5b-4c70-b43d-826f4ea87884\") "
Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.676744 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle" (OuterVolumeSpecName: "bundle") pod "3b577ebd-ea5b-4c70-b43d-826f4ea87884" (UID: "3b577ebd-ea5b-4c70-b43d-826f4ea87884"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.681025 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8" (OuterVolumeSpecName: "kube-api-access-lk6k8") pod "3b577ebd-ea5b-4c70-b43d-826f4ea87884" (UID: "3b577ebd-ea5b-4c70-b43d-826f4ea87884"). InnerVolumeSpecName "kube-api-access-lk6k8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.685983 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util" (OuterVolumeSpecName: "util") pod "3b577ebd-ea5b-4c70-b43d-826f4ea87884" (UID: "3b577ebd-ea5b-4c70-b43d-826f4ea87884"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.775568 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-util\") on node \"crc\" DevicePath \"\""
Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.775601 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lk6k8\" (UniqueName: \"kubernetes.io/projected/3b577ebd-ea5b-4c70-b43d-826f4ea87884-kube-api-access-lk6k8\") on node \"crc\" DevicePath \"\""
Feb 02 00:21:21 crc kubenswrapper[5108]: I0202 00:21:21.775659 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b577ebd-ea5b-4c70-b43d-826f4ea87884-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.306916 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb" event={"ID":"3b577ebd-ea5b-4c70-b43d-826f4ea87884","Type":"ContainerDied","Data":"cc8d25dea57e7e52a5d788f8c0e53956ed52d0364567e99eae8fc75630fe7ca9"}
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.306956 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc8d25dea57e7e52a5d788f8c0e53956ed52d0364567e99eae8fc75630fe7ca9"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.306996 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.487555 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"]
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488599 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="util"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488635 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="util"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488670 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="extract"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488683 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="extract"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488707 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="pull"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488724 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="pull"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.488923 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3b577ebd-ea5b-4c70-b43d-826f4ea87884" containerName="extract"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.501215 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"]
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.501365 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.506438 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.587071 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgms\" (UniqueName: \"kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.588463 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.589392 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.690589 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgms\" (UniqueName: \"kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.690698 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.690740 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"
Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.691656 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") "
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.692782 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.718749 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgms\" (UniqueName: \"kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:22 crc kubenswrapper[5108]: I0202 00:21:22.818262 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:23 crc kubenswrapper[5108]: I0202 00:21:23.260770 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95"] Feb 02 00:21:23 crc kubenswrapper[5108]: W0202 00:21:23.263569 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a27ac25_eac0_4877_a439_99fd1b7ea671.slice/crio-2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b WatchSource:0}: Error finding container 2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b: Status 404 returned error can't find the container with id 2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b Feb 02 00:21:23 crc kubenswrapper[5108]: I0202 00:21:23.315906 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" event={"ID":"2a27ac25-eac0-4877-a439-99fd1b7ea671","Type":"ContainerStarted","Data":"2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b"} Feb 02 00:21:24 crc kubenswrapper[5108]: I0202 00:21:24.332672 5108 generic.go:358] "Generic (PLEG): container finished" podID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerID="1317710cb20fa54818ef19c864a91d464a6c5b33a084db965d81c67d653503b1" exitCode=0 Feb 02 00:21:24 crc kubenswrapper[5108]: I0202 00:21:24.332747 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" event={"ID":"2a27ac25-eac0-4877-a439-99fd1b7ea671","Type":"ContainerDied","Data":"1317710cb20fa54818ef19c864a91d464a6c5b33a084db965d81c67d653503b1"} Feb 02 00:21:25 crc kubenswrapper[5108]: I0202 00:21:25.341658 5108 generic.go:358] "Generic (PLEG): container finished" podID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerID="9916d02f84396cc9813a4fd83613bfb8d021c6ef22c14ded40cdbd8a6b033881" exitCode=0 Feb 02 00:21:25 crc kubenswrapper[5108]: I0202 00:21:25.341831 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" 
event={"ID":"2a27ac25-eac0-4877-a439-99fd1b7ea671","Type":"ContainerDied","Data":"9916d02f84396cc9813a4fd83613bfb8d021c6ef22c14ded40cdbd8a6b033881"} Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.307540 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk"] Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.313415 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.320907 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk"] Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.335516 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.335581 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld9jr\" (UniqueName: \"kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.335715 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.349854 5108 generic.go:358] "Generic (PLEG): container finished" podID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerID="1bff12ef695180ab2eeb1b7c1cbf67c00db6f4fcaa938091baae8f24ac5a2fa0" exitCode=0 Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.350024 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" event={"ID":"2a27ac25-eac0-4877-a439-99fd1b7ea671","Type":"ContainerDied","Data":"1bff12ef695180ab2eeb1b7c1cbf67c00db6f4fcaa938091baae8f24ac5a2fa0"} Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.437193 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.437274 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ld9jr\" (UniqueName: 
\"kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.437312 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.437904 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.438142 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.462185 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld9jr\" (UniqueName: \"kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:26 crc kubenswrapper[5108]: I0202 00:21:26.627833 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.067422 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk"] Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.361878 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerStarted","Data":"333342191adc16bebef36b3b962a53cf0d69d89e809bdebb05023d5962f489b9"} Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.361946 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerStarted","Data":"fb3a14f5d6a6333e1bb81a6cf5ce121a5e4fa213dad0722af9a09a718dd82c63"} Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.792423 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.859713 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle\") pod \"2a27ac25-eac0-4877-a439-99fd1b7ea671\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.859870 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtgms\" (UniqueName: \"kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms\") pod \"2a27ac25-eac0-4877-a439-99fd1b7ea671\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.859932 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util\") pod \"2a27ac25-eac0-4877-a439-99fd1b7ea671\" (UID: \"2a27ac25-eac0-4877-a439-99fd1b7ea671\") " Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.866110 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle" (OuterVolumeSpecName: "bundle") pod "2a27ac25-eac0-4877-a439-99fd1b7ea671" (UID: "2a27ac25-eac0-4877-a439-99fd1b7ea671"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.882606 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms" (OuterVolumeSpecName: "kube-api-access-qtgms") pod "2a27ac25-eac0-4877-a439-99fd1b7ea671" (UID: "2a27ac25-eac0-4877-a439-99fd1b7ea671"). InnerVolumeSpecName "kube-api-access-qtgms". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.883715 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util" (OuterVolumeSpecName: "util") pod "2a27ac25-eac0-4877-a439-99fd1b7ea671" (UID: "2a27ac25-eac0-4877-a439-99fd1b7ea671"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.961912 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.961965 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qtgms\" (UniqueName: \"kubernetes.io/projected/2a27ac25-eac0-4877-a439-99fd1b7ea671-kube-api-access-qtgms\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:27 crc kubenswrapper[5108]: I0202 00:21:27.961976 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a27ac25-eac0-4877-a439-99fd1b7ea671-util\") on node \"crc\" DevicePath \"\"" Feb 02 00:21:28 crc kubenswrapper[5108]: I0202 00:21:28.374580 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" Feb 02 00:21:28 crc kubenswrapper[5108]: I0202 00:21:28.374624 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95" event={"ID":"2a27ac25-eac0-4877-a439-99fd1b7ea671","Type":"ContainerDied","Data":"2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b"} Feb 02 00:21:28 crc kubenswrapper[5108]: I0202 00:21:28.374684 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2782acecc1545acbe0116b664ac6f359ffc5d68d2cbae80e0b8b7da820f75a0b" Feb 02 00:21:28 crc kubenswrapper[5108]: I0202 00:21:28.376636 5108 generic.go:358] "Generic (PLEG): container finished" podID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerID="333342191adc16bebef36b3b962a53cf0d69d89e809bdebb05023d5962f489b9" exitCode=0 Feb 02 00:21:28 crc kubenswrapper[5108]: I0202 00:21:28.376707 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerDied","Data":"333342191adc16bebef36b3b962a53cf0d69d89e809bdebb05023d5962f489b9"} Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.784378 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6"] Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786197 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="pull" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786216 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="pull" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786260 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="util" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786267 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="util" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786289 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="extract" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786296 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="extract" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.786398 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2a27ac25-eac0-4877-a439-99fd1b7ea671" containerName="extract" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.822085 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6"] Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.822287 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.826695 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.827695 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-dqcjz\"" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.828498 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.929361 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld"] Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.938162 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.939467 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2hh\" (UniqueName: \"kubernetes.io/projected/3cae4b55-dd8b-41da-85fd-e3a48cd48a84-kube-api-access-wb2hh\") pod \"obo-prometheus-operator-9bc85b4bf-qx2r6\" (UID: \"3cae4b55-dd8b-41da-85fd-e3a48cd48a84\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.940348 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8"] Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.942455 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.942762 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-vjcrz\"" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.944196 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.951082 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8"] Feb 02 00:21:32 crc kubenswrapper[5108]: I0202 00:21:32.958417 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.032714 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-tdjm6"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.037625 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.040787 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.041942 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2hh\" (UniqueName: \"kubernetes.io/projected/3cae4b55-dd8b-41da-85fd-e3a48cd48a84-kube-api-access-wb2hh\") pod \"obo-prometheus-operator-9bc85b4bf-qx2r6\" (UID: \"3cae4b55-dd8b-41da-85fd-e3a48cd48a84\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.042007 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.042070 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.042096 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.042265 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.044818 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-jclnh\"" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.070190 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2hh\" (UniqueName: \"kubernetes.io/projected/3cae4b55-dd8b-41da-85fd-e3a48cd48a84-kube-api-access-wb2hh\") pod \"obo-prometheus-operator-9bc85b4bf-qx2r6\" (UID: \"3cae4b55-dd8b-41da-85fd-e3a48cd48a84\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.077614 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-tdjm6"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143458 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143498 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143541 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-observability-operator-tls\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143571 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmtd\" (UniqueName: \"kubernetes.io/projected/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-kube-api-access-2cmtd\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143597 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.143689 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.149188 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.150934 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b30b62b-4640-4186-8cec-9a4bce652c54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld\" (UID: \"7b30b62b-4640-4186-8cec-9a4bce652c54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc 
kubenswrapper[5108]: I0202 00:21:33.151482 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.151872 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea610d63-cdca-43f6-ae36-1021a5cfb158-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8\" (UID: \"ea610d63-cdca-43f6-ae36-1021a5cfb158\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.156897 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.241758 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-twmfp"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.244993 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-observability-operator-tls\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.245211 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2cmtd\" (UniqueName: \"kubernetes.io/projected/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-kube-api-access-2cmtd\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.249273 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.250171 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-observability-operator-tls\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.254638 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-dk6cv\"" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.265738 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.267038 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cmtd\" (UniqueName: \"kubernetes.io/projected/6b7e0bd1-72e0-4772-a2cf-8287051d3acd-kube-api-access-2cmtd\") pod \"observability-operator-85c68dddb-tdjm6\" (UID: \"6b7e0bd1-72e0-4772-a2cf-8287051d3acd\") " pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.271189 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-twmfp"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.280157 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.347395 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvxrx\" (UniqueName: \"kubernetes.io/projected/600911fd-7824-48ed-a826-60768dce689a-kube-api-access-jvxrx\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.347474 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/600911fd-7824-48ed-a826-60768dce689a-openshift-service-ca\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.368186 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-tdjm6" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.447021 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerStarted","Data":"6aff36c0ed2c2bc19c286f270a763c69381116735c7a583fda4be9f55c1e84c3"} Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.448145 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/600911fd-7824-48ed-a826-60768dce689a-openshift-service-ca\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.448219 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jvxrx\" (UniqueName: \"kubernetes.io/projected/600911fd-7824-48ed-a826-60768dce689a-kube-api-access-jvxrx\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.449556 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/600911fd-7824-48ed-a826-60768dce689a-openshift-service-ca\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.479686 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvxrx\" (UniqueName: \"kubernetes.io/projected/600911fd-7824-48ed-a826-60768dce689a-kube-api-access-jvxrx\") pod \"perses-operator-669c9f96b5-twmfp\" (UID: \"600911fd-7824-48ed-a826-60768dce689a\") " pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.608367 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.776421 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.804337 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8"] Feb 02 00:21:33 crc kubenswrapper[5108]: I0202 00:21:33.923748 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-tdjm6"] Feb 02 00:21:33 crc kubenswrapper[5108]: W0202 00:21:33.946432 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b7e0bd1_72e0_4772_a2cf_8287051d3acd.slice/crio-5074936e087a0b8f1cfc36729e2c3647f8c9d8faaca2eeefc0bfff6014e57107 WatchSource:0}: Error finding container 5074936e087a0b8f1cfc36729e2c3647f8c9d8faaca2eeefc0bfff6014e57107: Status 404 returned error can't find the container with id 5074936e087a0b8f1cfc36729e2c3647f8c9d8faaca2eeefc0bfff6014e57107 Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.016629 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld"] Feb 02 00:21:34 crc kubenswrapper[5108]: W0202 00:21:34.023011 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b30b62b_4640_4186_8cec_9a4bce652c54.slice/crio-c434081c0d72466ce55aea80e7d278e8578862dc7a4ab206c90c22e34400aa94 WatchSource:0}: Error finding container c434081c0d72466ce55aea80e7d278e8578862dc7a4ab206c90c22e34400aa94: Status 404 returned error can't find the container with id c434081c0d72466ce55aea80e7d278e8578862dc7a4ab206c90c22e34400aa94 Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.029713 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-twmfp"] Feb 02 00:21:34 crc kubenswrapper[5108]: W0202 00:21:34.045111 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod600911fd_7824_48ed_a826_60768dce689a.slice/crio-a6676f5998611b2fde0d579a2cfc6d2fdeb240f104f1ec6f328474358dd6fa39 WatchSource:0}: Error finding container a6676f5998611b2fde0d579a2cfc6d2fdeb240f104f1ec6f328474358dd6fa39: Status 404 returned error can't find the container with id a6676f5998611b2fde0d579a2cfc6d2fdeb240f104f1ec6f328474358dd6fa39 Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.464765 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" event={"ID":"ea610d63-cdca-43f6-ae36-1021a5cfb158","Type":"ContainerStarted","Data":"a5fc15f70e97a6fe834548387adc8d6465cf96c0a47f06841dbc0e0d2861da35"} Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.466855 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" event={"ID":"3cae4b55-dd8b-41da-85fd-e3a48cd48a84","Type":"ContainerStarted","Data":"1559c49a4ea838d468316facffb55760f3175a55f128844461b7cfae7ed87357"} Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.467686 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-tdjm6" 
event={"ID":"6b7e0bd1-72e0-4772-a2cf-8287051d3acd","Type":"ContainerStarted","Data":"5074936e087a0b8f1cfc36729e2c3647f8c9d8faaca2eeefc0bfff6014e57107"} Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.470439 5108 generic.go:358] "Generic (PLEG): container finished" podID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerID="6aff36c0ed2c2bc19c286f270a763c69381116735c7a583fda4be9f55c1e84c3" exitCode=0 Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.470633 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerDied","Data":"6aff36c0ed2c2bc19c286f270a763c69381116735c7a583fda4be9f55c1e84c3"} Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.474026 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" event={"ID":"7b30b62b-4640-4186-8cec-9a4bce652c54","Type":"ContainerStarted","Data":"c434081c0d72466ce55aea80e7d278e8578862dc7a4ab206c90c22e34400aa94"} Feb 02 00:21:34 crc kubenswrapper[5108]: I0202 00:21:34.477108 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" event={"ID":"600911fd-7824-48ed-a826-60768dce689a","Type":"ContainerStarted","Data":"a6676f5998611b2fde0d579a2cfc6d2fdeb240f104f1ec6f328474358dd6fa39"} Feb 02 00:21:35 crc kubenswrapper[5108]: I0202 00:21:35.491930 5108 generic.go:358] "Generic (PLEG): container finished" podID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerID="389b38f0f5835f63e2beb4147aa5d526ede7fa13341eee189f8e868c666c3262" exitCode=0 Feb 02 00:21:35 crc kubenswrapper[5108]: I0202 00:21:35.492077 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerDied","Data":"389b38f0f5835f63e2beb4147aa5d526ede7fa13341eee189f8e868c666c3262"} Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.493154 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7b74cb5c57-cx5qg"] Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.499303 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.508644 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.508650 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.509030 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.511091 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-xvmlt\"" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.512601 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7b74cb5c57-cx5qg"] Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.626704 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5q8j\" (UniqueName: \"kubernetes.io/projected/dbc6504f-e1af-4747-a2b1-3260272984f3-kube-api-access-b5q8j\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.626762 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-webhook-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.626859 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-apiservice-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.729728 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b5q8j\" (UniqueName: \"kubernetes.io/projected/dbc6504f-e1af-4747-a2b1-3260272984f3-kube-api-access-b5q8j\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.730362 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-webhook-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.730395 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-apiservice-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" 
Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.742041 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-apiservice-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg"
Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.751431 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5q8j\" (UniqueName: \"kubernetes.io/projected/dbc6504f-e1af-4747-a2b1-3260272984f3-kube-api-access-b5q8j\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg"
Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.756168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc6504f-e1af-4747-a2b1-3260272984f3-webhook-cert\") pod \"elastic-operator-7b74cb5c57-cx5qg\" (UID: \"dbc6504f-e1af-4747-a2b1-3260272984f3\") " pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg"
Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.831723 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg"
Feb 02 00:21:36 crc kubenswrapper[5108]: I0202 00:21:36.892322 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk"
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.036961 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld9jr\" (UniqueName: \"kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr\") pod \"7fedf68a-9fd7-4344-b2d4-7856f539c455\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") "
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.037067 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle\") pod \"7fedf68a-9fd7-4344-b2d4-7856f539c455\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") "
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.037088 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util\") pod \"7fedf68a-9fd7-4344-b2d4-7856f539c455\" (UID: \"7fedf68a-9fd7-4344-b2d4-7856f539c455\") "
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.038129 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle" (OuterVolumeSpecName: "bundle") pod "7fedf68a-9fd7-4344-b2d4-7856f539c455" (UID: "7fedf68a-9fd7-4344-b2d4-7856f539c455"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.049482 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr" (OuterVolumeSpecName: "kube-api-access-ld9jr") pod "7fedf68a-9fd7-4344-b2d4-7856f539c455" (UID: "7fedf68a-9fd7-4344-b2d4-7856f539c455"). InnerVolumeSpecName "kube-api-access-ld9jr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.064568 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util" (OuterVolumeSpecName: "util") pod "7fedf68a-9fd7-4344-b2d4-7856f539c455" (UID: "7fedf68a-9fd7-4344-b2d4-7856f539c455"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.138301 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ld9jr\" (UniqueName: \"kubernetes.io/projected/7fedf68a-9fd7-4344-b2d4-7856f539c455-kube-api-access-ld9jr\") on node \"crc\" DevicePath \"\""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.138731 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.138740 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fedf68a-9fd7-4344-b2d4-7856f539c455-util\") on node \"crc\" DevicePath \"\""
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.352420 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7b74cb5c57-cx5qg"]
Feb 02 00:21:37 crc kubenswrapper[5108]: W0202 00:21:37.358502 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbc6504f_e1af_4747_a2b1_3260272984f3.slice/crio-78ab70775d4d9d49f222a8d5a28a927a907b417a8425d4dfcbc5e01c1a77eab9 WatchSource:0}: Error finding container 78ab70775d4d9d49f222a8d5a28a927a907b417a8425d4dfcbc5e01c1a77eab9: Status 404 returned error can't find the container with id 78ab70775d4d9d49f222a8d5a28a927a907b417a8425d4dfcbc5e01c1a77eab9
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.550276 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" event={"ID":"dbc6504f-e1af-4747-a2b1-3260272984f3","Type":"ContainerStarted","Data":"78ab70775d4d9d49f222a8d5a28a927a907b417a8425d4dfcbc5e01c1a77eab9"}
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.557193 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk"
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.571118 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk" event={"ID":"7fedf68a-9fd7-4344-b2d4-7856f539c455","Type":"ContainerDied","Data":"fb3a14f5d6a6333e1bb81a6cf5ce121a5e4fa213dad0722af9a09a718dd82c63"}
Feb 02 00:21:37 crc kubenswrapper[5108]: I0202 00:21:37.571189 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb3a14f5d6a6333e1bb81a6cf5ce121a5e4fa213dad0722af9a09a718dd82c63"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.166419 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"]
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167586 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="pull"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167602 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="pull"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167623 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="extract"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167628 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="extract"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167652 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="util"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167658 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="util"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.167748 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7fedf68a-9fd7-4344-b2d4-7856f539c455" containerName="extract"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.178168 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.180463 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.180735 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-7zlpp\""
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.182044 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.182825 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"]
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.278955 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1820eeba-be2c-4340-843a-2caf82b3b450-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.279049 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqds8\" (UniqueName: \"kubernetes.io/projected/1820eeba-be2c-4340-843a-2caf82b3b450-kube-api-access-wqds8\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.380286 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqds8\" (UniqueName: \"kubernetes.io/projected/1820eeba-be2c-4340-843a-2caf82b3b450-kube-api-access-wqds8\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.380402 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1820eeba-be2c-4340-843a-2caf82b3b450-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.380952 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1820eeba-be2c-4340-843a-2caf82b3b450-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.404661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqds8\" (UniqueName: \"kubernetes.io/projected/1820eeba-be2c-4340-843a-2caf82b3b450-kube-api-access-wqds8\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-kqmcc\" (UID: \"1820eeba-be2c-4340-843a-2caf82b3b450\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.497406 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.687159 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" event={"ID":"ea610d63-cdca-43f6-ae36-1021a5cfb158","Type":"ContainerStarted","Data":"efd0c5cb3d39595715958b29b2ffb4a011b6e94ae5f101156c8b5196922cf11d"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.693414 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" event={"ID":"3cae4b55-dd8b-41da-85fd-e3a48cd48a84","Type":"ContainerStarted","Data":"c285341b5f1d6b8da0f004554563d75c71b92ab7d272e55d2c8fc110cb5a5117"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.695424 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" event={"ID":"dbc6504f-e1af-4747-a2b1-3260272984f3","Type":"ContainerStarted","Data":"0937438871d36d99ad44e8724196c5684f8a83c31e378b90c7ed3de2cf3afcfc"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.698352 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-tdjm6" event={"ID":"6b7e0bd1-72e0-4772-a2cf-8287051d3acd","Type":"ContainerStarted","Data":"2e7cd1e77f2c6747ffbe1253b03f9e102710a9f35bb7655694993b33c0de9294"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.699160 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-tdjm6"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.700745 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" event={"ID":"7b30b62b-4640-4186-8cec-9a4bce652c54","Type":"ContainerStarted","Data":"7b6593954e2ba51932bb7bf877bb36be526a0ef9a5d8ecd1da93dfdcc5cb0540"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.702052 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-tdjm6"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.715862 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" event={"ID":"600911fd-7824-48ed-a826-60768dce689a","Type":"ContainerStarted","Data":"f61ca2f8995d7c71c4a4094622ea8f95ff364d27c81203b4281d2bd9612d4a40"}
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.716083 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-twmfp"
Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.722277 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8" podStartSLOduration=2.879958055 podStartE2EDuration="17.722254706s" podCreationTimestamp="2026-02-02 00:21:32 +0000 UTC" firstStartedPulling="2026-02-02 00:21:33.842879659 +0000 UTC m=+693.118376589" lastFinishedPulling="2026-02-02 00:21:48.68517631 +0000 UTC m=+707.960673240" observedRunningTime="2026-02-02 00:21:49.722128562 +0000
UTC m=+708.997625502" watchObservedRunningTime="2026-02-02 00:21:49.722254706 +0000 UTC m=+708.997751636" Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.766330 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc"] Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.770312 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qx2r6" podStartSLOduration=2.914821663 podStartE2EDuration="17.770292556s" podCreationTimestamp="2026-02-02 00:21:32 +0000 UTC" firstStartedPulling="2026-02-02 00:21:33.817558874 +0000 UTC m=+693.093055804" lastFinishedPulling="2026-02-02 00:21:48.673029767 +0000 UTC m=+707.948526697" observedRunningTime="2026-02-02 00:21:49.753108144 +0000 UTC m=+709.028605104" watchObservedRunningTime="2026-02-02 00:21:49.770292556 +0000 UTC m=+709.045789506" Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.819763 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7b74cb5c57-cx5qg" podStartSLOduration=2.512353843 podStartE2EDuration="13.819737034s" podCreationTimestamp="2026-02-02 00:21:36 +0000 UTC" firstStartedPulling="2026-02-02 00:21:37.365220334 +0000 UTC m=+696.640717264" lastFinishedPulling="2026-02-02 00:21:48.672603525 +0000 UTC m=+707.948100455" observedRunningTime="2026-02-02 00:21:49.799977311 +0000 UTC m=+709.075474251" watchObservedRunningTime="2026-02-02 00:21:49.819737034 +0000 UTC m=+709.095233974" Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.833011 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld" podStartSLOduration=3.188206047 podStartE2EDuration="17.832989419s" podCreationTimestamp="2026-02-02 00:21:32 +0000 UTC" firstStartedPulling="2026-02-02 00:21:34.02842921 +0000 UTC m=+693.303926140" lastFinishedPulling="2026-02-02 00:21:48.673212582 +0000 UTC m=+707.948709512" observedRunningTime="2026-02-02 00:21:49.82684826 +0000 UTC m=+709.102345200" watchObservedRunningTime="2026-02-02 00:21:49.832989419 +0000 UTC m=+709.108486349" Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.902728 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-tdjm6" podStartSLOduration=2.116127264 podStartE2EDuration="16.902708135s" podCreationTimestamp="2026-02-02 00:21:33 +0000 UTC" firstStartedPulling="2026-02-02 00:21:33.955303179 +0000 UTC m=+693.230800109" lastFinishedPulling="2026-02-02 00:21:48.74188406 +0000 UTC m=+708.017380980" observedRunningTime="2026-02-02 00:21:49.864635959 +0000 UTC m=+709.140132909" watchObservedRunningTime="2026-02-02 00:21:49.902708135 +0000 UTC m=+709.178205075" Feb 02 00:21:49 crc kubenswrapper[5108]: I0202 00:21:49.906284 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" podStartSLOduration=2.28108867 podStartE2EDuration="16.906271093s" podCreationTimestamp="2026-02-02 00:21:33 +0000 UTC" firstStartedPulling="2026-02-02 00:21:34.048777199 +0000 UTC m=+693.324274129" lastFinishedPulling="2026-02-02 00:21:48.673959622 +0000 UTC m=+707.949456552" observedRunningTime="2026-02-02 00:21:49.89887155 +0000 UTC m=+709.174368520" watchObservedRunningTime="2026-02-02 00:21:49.906271093 +0000 UTC m=+709.181768033"
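
[editor's note] The "Observed pod startup duration" records above follow a fixed arithmetic, verifiable from the logged values: podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling), where podStartE2EDuration runs from podCreationTimestamp to watchObservedRunningTime; time spent pulling images is excluded from the SLO figure. (The m=+... suffixes are Go's monotonic clock reading, i.e. seconds since this kubelet process started at about 00:10:00.) A minimal Go sketch that reproduces the obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8 numbers; the timestamps are copied verbatim from that entry, and the variable names are ours:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's default time.Time formatting used in these logs.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-02-02 00:21:32 +0000 UTC")             // podCreationTimestamp
        firstPull := parse("2026-02-02 00:21:33.842879659 +0000 UTC") // firstStartedPulling
        lastPull := parse("2026-02-02 00:21:48.68517631 +0000 UTC")   // lastFinishedPulling
        running := parse("2026-02-02 00:21:49.722254706 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)     // 17.722254706s = podStartE2EDuration
        pull := lastPull.Sub(firstPull) // 14.842296651s spent pulling images
        fmt.Println(e2e - pull)         // 2.879958055s = podStartSLOduration
    }

The same subtraction checks out for the other entries above (e.g. jj8ld: 17.832989419s - 14.644783372s = 3.188206047s).
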
Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.724114 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc" event={"ID":"1820eeba-be2c-4340-843a-2caf82b3b450","Type":"ContainerStarted","Data":"939fba30b22bf5d03ad8928c8d7d94cd666aeece31637800841885ac4dec14fe"} Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.919589 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.919673 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.919721 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.920390 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:21:50 crc kubenswrapper[5108]: I0202 00:21:50.920446 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" containerID="cri-o://2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31" gracePeriod=600 Feb 02 00:21:51 crc kubenswrapper[5108]: I0202 00:21:51.741660 5108 generic.go:358] "Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31" exitCode=0 Feb 02 00:21:51 crc kubenswrapper[5108]: I0202 00:21:51.743482 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31"} Feb 02 00:21:51 crc kubenswrapper[5108]: I0202 00:21:51.743514 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d"} Feb 02 00:21:51 crc kubenswrapper[5108]: I0202 00:21:51.743534 5108 scope.go:117] "RemoveContainer" containerID="0e2568caf741572a83d3d444d4f4d6722d2e6e9a09c71f1dec22c400db69da1e"
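
[editor's note] The run above is kubelet's standard liveness-failure path: the prober gets "connection refused" on the container's health endpoint, the sync loop marks the container unhealthy, the runtime manager kills it with the pod's termination grace period (gracePeriod=600), PLEG then reports ContainerDied followed by ContainerStarted for the replacement, and the previous dead container is pruned ("RemoveContainer"). A rough stdlib Go sketch of the HTTP check itself; this is illustrative only, not kubelet's actual prober code, with the URL taken from the log and the success criterion being any 2xx/3xx status, as kubelet uses:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe performs one HTTP liveness check. Any transport error (such as
    // "connect: connection refused" above) or a non-2xx/3xx status is a failure.
    func probe(url string) error {
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err) // kubelet restarts the container once failureThreshold is hit
        } else {
            fmt.Println("Probe succeeded")
        }
    }
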
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.960586 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.960931 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.961139 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.961487 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.961566 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.965694 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.966138 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-442s6\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.969076 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Feb 02 00:21:52 crc kubenswrapper[5108]: I0202 00:21:52.977003 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.015455 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034519 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034568 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034588 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034618 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034634 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/91781fe7-72ca-4748-8dcd-5d7d1c275472-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034662 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034684 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034708 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034728 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034770 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.034788 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.035767 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.035809 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.035832 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.035846 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.138846 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.138904 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.138929 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.138948 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.138992 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139024 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139040 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139058 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139079 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139096 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139129 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139144 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139160 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: 
\"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139190 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.139206 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/91781fe7-72ca-4748-8dcd-5d7d1c275472-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.142620 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.143263 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.143560 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.143813 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.144077 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.144762 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.146971 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.147626 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.147703 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.151816 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.152339 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/91781fe7-72ca-4748-8dcd-5d7d1c275472-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.154474 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.163601 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.171495 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.174355 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/91781fe7-72ca-4748-8dcd-5d7d1c275472-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: 
\"91781fe7-72ca-4748-8dcd-5d7d1c275472\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:53 crc kubenswrapper[5108]: I0202 00:21:53.285952 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:21:54 crc kubenswrapper[5108]: I0202 00:21:54.498744 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 02 00:21:54 crc kubenswrapper[5108]: W0202 00:21:54.512784 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91781fe7_72ca_4748_8dcd_5d7d1c275472.slice/crio-44a372cf31eb90f97681367bc93002df2d4832d7ae8e57cf00b44707d491213e WatchSource:0}: Error finding container 44a372cf31eb90f97681367bc93002df2d4832d7ae8e57cf00b44707d491213e: Status 404 returned error can't find the container with id 44a372cf31eb90f97681367bc93002df2d4832d7ae8e57cf00b44707d491213e Feb 02 00:21:54 crc kubenswrapper[5108]: I0202 00:21:54.769662 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc" event={"ID":"1820eeba-be2c-4340-843a-2caf82b3b450","Type":"ContainerStarted","Data":"21402d94ddc844fdfeb341a432b8360de71f168962675a441d21ff47ce0a322c"} Feb 02 00:21:54 crc kubenswrapper[5108]: I0202 00:21:54.771831 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"91781fe7-72ca-4748-8dcd-5d7d1c275472","Type":"ContainerStarted","Data":"44a372cf31eb90f97681367bc93002df2d4832d7ae8e57cf00b44707d491213e"} Feb 02 00:21:54 crc kubenswrapper[5108]: I0202 00:21:54.801461 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-kqmcc" podStartSLOduration=1.242451817 podStartE2EDuration="5.801443925s" podCreationTimestamp="2026-02-02 00:21:49 +0000 UTC" firstStartedPulling="2026-02-02 00:21:49.766382718 +0000 UTC m=+709.041879648" lastFinishedPulling="2026-02-02 00:21:54.325374836 +0000 UTC m=+713.600871756" observedRunningTime="2026-02-02 00:21:54.79450999 +0000 UTC m=+714.070006920" watchObservedRunningTime="2026-02-02 00:21:54.801443925 +0000 UTC m=+714.076940855" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.665127 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-gwlkp"] Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.674037 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.673631 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-gwlkp"] Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.677086 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.677102 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.679733 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-jbvdl\"" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.841480 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.841613 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffm5f\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-kube-api-access-ffm5f\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.942912 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.942971 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ffm5f\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-kube-api-access-ffm5f\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.964458 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffm5f\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-kube-api-access-ffm5f\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.972240 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c526e59-9f54-4c07-9df7-9c254286c8b2-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-gwlkp\" (UID: \"9c526e59-9f54-4c07-9df7-9c254286c8b2\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:21:59 crc kubenswrapper[5108]: I0202 00:21:59.993315 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.133779 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499862-nmjl8"] Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.140981 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.145588 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.145837 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.146684 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499862-nmjl8"] Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.147627 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.248272 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzhcg\" (UniqueName: \"kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg\") pod \"auto-csr-approver-29499862-nmjl8\" (UID: \"e35e90a5-9be9-4d25-a87f-80c879fadbdb\") " pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.349995 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qzhcg\" (UniqueName: \"kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg\") pod \"auto-csr-approver-29499862-nmjl8\" (UID: \"e35e90a5-9be9-4d25-a87f-80c879fadbdb\") " pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.375307 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzhcg\" (UniqueName: \"kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg\") pod \"auto-csr-approver-29499862-nmjl8\" (UID: \"e35e90a5-9be9-4d25-a87f-80c879fadbdb\") " pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.443742 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-gwlkp"] Feb 02 00:22:00 crc kubenswrapper[5108]: W0202 00:22:00.453403 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c526e59_9f54_4c07_9df7_9c254286c8b2.slice/crio-46c945a7ea295aefdb1ca3889db4c43a13d88dbf73dd7b5482d899e06884eb26 WatchSource:0}: Error finding container 46c945a7ea295aefdb1ca3889db4c43a13d88dbf73dd7b5482d899e06884eb26: Status 404 returned error can't find the container with id 46c945a7ea295aefdb1ca3889db4c43a13d88dbf73dd7b5482d899e06884eb26 Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.471086 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.678399 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499862-nmjl8"] Feb 02 00:22:00 crc kubenswrapper[5108]: W0202 00:22:00.693711 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode35e90a5_9be9_4d25_a87f_80c879fadbdb.slice/crio-51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70 WatchSource:0}: Error finding container 51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70: Status 404 returned error can't find the container with id 51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70 Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.728673 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-twmfp" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.823785 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" event={"ID":"9c526e59-9f54-4c07-9df7-9c254286c8b2","Type":"ContainerStarted","Data":"46c945a7ea295aefdb1ca3889db4c43a13d88dbf73dd7b5482d899e06884eb26"} Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.835433 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" event={"ID":"e35e90a5-9be9-4d25-a87f-80c879fadbdb","Type":"ContainerStarted","Data":"51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70"} Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.838282 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-md5xl"] Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.842251 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.846143 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-ttpr4\"" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.856643 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-md5xl"] Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.962565 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:00 crc kubenswrapper[5108]: I0202 00:22:00.962625 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qxf\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-kube-api-access-77qxf\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.063936 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.064027 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77qxf\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-kube-api-access-77qxf\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.084978 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77qxf\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-kube-api-access-77qxf\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.087959 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36067e0f-9235-409f-83d9-125165d03451-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-md5xl\" (UID: \"36067e0f-9235-409f-83d9-125165d03451\") " pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.165167 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.504731 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-md5xl"] Feb 02 00:22:01 crc kubenswrapper[5108]: I0202 00:22:01.873735 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" event={"ID":"36067e0f-9235-409f-83d9-125165d03451","Type":"ContainerStarted","Data":"a4801c99b691d85a15f2704d4ff55b4833a3bad762b25c878f2c36ff5005a2c5"} Feb 02 00:22:02 crc kubenswrapper[5108]: I0202 00:22:02.883939 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" event={"ID":"e35e90a5-9be9-4d25-a87f-80c879fadbdb","Type":"ContainerStarted","Data":"ac142680678000a1c22ed75ac938d78969d68b4d54d50e573d123eec7fdc4975"} Feb 02 00:22:02 crc kubenswrapper[5108]: I0202 00:22:02.961706 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" podStartSLOduration=1.9612024319999999 podStartE2EDuration="2.961686276s" podCreationTimestamp="2026-02-02 00:22:00 +0000 UTC" firstStartedPulling="2026-02-02 00:22:00.696690064 +0000 UTC m=+719.972186994" lastFinishedPulling="2026-02-02 00:22:01.697173908 +0000 UTC m=+720.972670838" observedRunningTime="2026-02-02 00:22:02.957328564 +0000 UTC m=+722.232825494" watchObservedRunningTime="2026-02-02 00:22:02.961686276 +0000 UTC m=+722.237183206" Feb 02 00:22:03 crc kubenswrapper[5108]: I0202 00:22:03.899498 5108 generic.go:358] "Generic (PLEG): container finished" podID="e35e90a5-9be9-4d25-a87f-80c879fadbdb" containerID="ac142680678000a1c22ed75ac938d78969d68b4d54d50e573d123eec7fdc4975" exitCode=0 Feb 02 00:22:03 crc kubenswrapper[5108]: I0202 00:22:03.899697 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" event={"ID":"e35e90a5-9be9-4d25-a87f-80c879fadbdb","Type":"ContainerDied","Data":"ac142680678000a1c22ed75ac938d78969d68b4d54d50e573d123eec7fdc4975"} Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.185232 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.279728 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzhcg\" (UniqueName: \"kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg\") pod \"e35e90a5-9be9-4d25-a87f-80c879fadbdb\" (UID: \"e35e90a5-9be9-4d25-a87f-80c879fadbdb\") " Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.287059 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg" (OuterVolumeSpecName: "kube-api-access-qzhcg") pod "e35e90a5-9be9-4d25-a87f-80c879fadbdb" (UID: "e35e90a5-9be9-4d25-a87f-80c879fadbdb"). InnerVolumeSpecName "kube-api-access-qzhcg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.388299 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qzhcg\" (UniqueName: \"kubernetes.io/projected/e35e90a5-9be9-4d25-a87f-80c879fadbdb-kube-api-access-qzhcg\") on node \"crc\" DevicePath \"\"" Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.925375 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" event={"ID":"e35e90a5-9be9-4d25-a87f-80c879fadbdb","Type":"ContainerDied","Data":"51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70"} Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.925436 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51b2d17ed67e42cdac5b2f5f604b170cbdaecb56ea11e9bb1fcb26e25b4fda70" Feb 02 00:22:05 crc kubenswrapper[5108]: I0202 00:22:05.925463 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499862-nmjl8" Feb 02 00:22:06 crc kubenswrapper[5108]: I0202 00:22:06.232902 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499856-n677f"] Feb 02 00:22:06 crc kubenswrapper[5108]: I0202 00:22:06.236782 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499856-n677f"] Feb 02 00:22:07 crc kubenswrapper[5108]: I0202 00:22:07.564433 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d68061-8bea-4670-828e-3fd982547198" path="/var/lib/kubelet/pods/b2d68061-8bea-4670-828e-3fd982547198/volumes" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.380870 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.382317 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e35e90a5-9be9-4d25-a87f-80c879fadbdb" containerName="oc" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.382331 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35e90a5-9be9-4d25-a87f-80c879fadbdb" containerName="oc" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.382449 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e35e90a5-9be9-4d25-a87f-80c879fadbdb" containerName="oc" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.430913 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.431068 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.433599 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-catalog-configmap-partition-1\"" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.562900 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.563414 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9zqj\" (UniqueName: \"kubernetes.io/projected/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-kube-api-access-x9zqj\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.563503 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.664610 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.664704 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x9zqj\" (UniqueName: \"kubernetes.io/projected/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-kube-api-access-x9zqj\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.664790 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " 
pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.665900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.666655 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.700130 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9zqj\" (UniqueName: \"kubernetes.io/projected/13d5efa3-18a2-405c-96ec-e5ee2d3014b2-kube-api-access-x9zqj\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"13d5efa3-18a2-405c-96ec-e5ee2d3014b2\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:14 crc kubenswrapper[5108]: I0202 00:22:14.752821 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.882366 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-z8j4s"] Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.888347 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.893124 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-md8ws\"" Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.902946 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-z8j4s"] Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.998205 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-bound-sa-token\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:16 crc kubenswrapper[5108]: I0202 00:22:16.998269 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljrg6\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-kube-api-access-ljrg6\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:17 crc kubenswrapper[5108]: I0202 00:22:17.099293 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-bound-sa-token\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:17 crc kubenswrapper[5108]: I0202 00:22:17.099347 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljrg6\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-kube-api-access-ljrg6\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:17 crc kubenswrapper[5108]: I0202 00:22:17.119604 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljrg6\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-kube-api-access-ljrg6\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:17 crc kubenswrapper[5108]: I0202 00:22:17.120079 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0e17311-6020-462f-9ab7-8db9a5b4fd53-bound-sa-token\") pod \"cert-manager-759f64656b-z8j4s\" (UID: \"f0e17311-6020-462f-9ab7-8db9a5b4fd53\") " pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:17 crc kubenswrapper[5108]: I0202 00:22:17.210057 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-z8j4s" Feb 02 00:22:18 crc kubenswrapper[5108]: I0202 00:22:18.211392 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 02 00:22:18 crc kubenswrapper[5108]: W0202 00:22:18.224483 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d5efa3_18a2_405c_96ec_e5ee2d3014b2.slice/crio-681ae3ca523005d5137b6d4fc907682c1d49bbef69ee91ba664e7e0be6ab1205 WatchSource:0}: Error finding container 681ae3ca523005d5137b6d4fc907682c1d49bbef69ee91ba664e7e0be6ab1205: Status 404 returned error can't find the container with id 681ae3ca523005d5137b6d4fc907682c1d49bbef69ee91ba664e7e0be6ab1205 Feb 02 00:22:18 crc kubenswrapper[5108]: W0202 00:22:18.279921 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0e17311_6020_462f_9ab7_8db9a5b4fd53.slice/crio-c83ef1abfc1b77b3d329f48e1f9a225c26c83c8a137bf662b7929943745b7875 WatchSource:0}: Error finding container c83ef1abfc1b77b3d329f48e1f9a225c26c83c8a137bf662b7929943745b7875: Status 404 returned error can't find the container with id c83ef1abfc1b77b3d329f48e1f9a225c26c83c8a137bf662b7929943745b7875 Feb 02 00:22:18 crc kubenswrapper[5108]: I0202 00:22:18.282787 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-z8j4s"] Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.023431 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"13d5efa3-18a2-405c-96ec-e5ee2d3014b2","Type":"ContainerStarted","Data":"681ae3ca523005d5137b6d4fc907682c1d49bbef69ee91ba664e7e0be6ab1205"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.025851 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" event={"ID":"36067e0f-9235-409f-83d9-125165d03451","Type":"ContainerStarted","Data":"910fe1e9d1f303676d781b1bae1205ed9252606668b1b865b8fa1f886424c0d6"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.026014 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.028651 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"91781fe7-72ca-4748-8dcd-5d7d1c275472","Type":"ContainerStarted","Data":"7ecd5f58b5fe2f871e7b269b373e5e2fc280e928be4497b883044d2c36a03ab4"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.029849 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" event={"ID":"9c526e59-9f54-4c07-9df7-9c254286c8b2","Type":"ContainerStarted","Data":"01bf8af26ce1df79714b9bcae9bdc6cf8187e634cc20810db6409c6eca49c881"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.031454 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-z8j4s" event={"ID":"f0e17311-6020-462f-9ab7-8db9a5b4fd53","Type":"ContainerStarted","Data":"efce5e45a33c822592fb6de999f000bfb240d91475033f7cf55d84ecabbbd810"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.031480 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-z8j4s" 
event={"ID":"f0e17311-6020-462f-9ab7-8db9a5b4fd53","Type":"ContainerStarted","Data":"c83ef1abfc1b77b3d329f48e1f9a225c26c83c8a137bf662b7929943745b7875"} Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.068992 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" podStartSLOduration=2.728748533 podStartE2EDuration="19.068976216s" podCreationTimestamp="2026-02-02 00:22:00 +0000 UTC" firstStartedPulling="2026-02-02 00:22:01.687707684 +0000 UTC m=+720.963204614" lastFinishedPulling="2026-02-02 00:22:18.027935347 +0000 UTC m=+737.303432297" observedRunningTime="2026-02-02 00:22:19.065844698 +0000 UTC m=+738.341341638" watchObservedRunningTime="2026-02-02 00:22:19.068976216 +0000 UTC m=+738.344473146" Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.093012 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-z8j4s" podStartSLOduration=3.092989589 podStartE2EDuration="3.092989589s" podCreationTimestamp="2026-02-02 00:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:22:19.087745912 +0000 UTC m=+738.363242842" watchObservedRunningTime="2026-02-02 00:22:19.092989589 +0000 UTC m=+738.368486529" Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.177494 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-gwlkp" podStartSLOduration=2.640117852 podStartE2EDuration="20.177474374s" podCreationTimestamp="2026-02-02 00:21:59 +0000 UTC" firstStartedPulling="2026-02-02 00:22:00.460838861 +0000 UTC m=+719.736335791" lastFinishedPulling="2026-02-02 00:22:17.998195363 +0000 UTC m=+737.273692313" observedRunningTime="2026-02-02 00:22:19.121937199 +0000 UTC m=+738.397434129" watchObservedRunningTime="2026-02-02 00:22:19.177474374 +0000 UTC m=+738.452971304" Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.317789 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 02 00:22:19 crc kubenswrapper[5108]: I0202 00:22:19.346139 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 02 00:22:21 crc kubenswrapper[5108]: I0202 00:22:21.049791 5108 generic.go:358] "Generic (PLEG): container finished" podID="91781fe7-72ca-4748-8dcd-5d7d1c275472" containerID="7ecd5f58b5fe2f871e7b269b373e5e2fc280e928be4497b883044d2c36a03ab4" exitCode=0 Feb 02 00:22:21 crc kubenswrapper[5108]: I0202 00:22:21.049845 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"91781fe7-72ca-4748-8dcd-5d7d1c275472","Type":"ContainerDied","Data":"7ecd5f58b5fe2f871e7b269b373e5e2fc280e928be4497b883044d2c36a03ab4"} Feb 02 00:22:24 crc kubenswrapper[5108]: I0202 00:22:24.074015 5108 generic.go:358] "Generic (PLEG): container finished" podID="13d5efa3-18a2-405c-96ec-e5ee2d3014b2" containerID="6ccc56e44008c8bdc70fabbd8ac843e8bb5c8f578b2f88a9b867948e4db96b0c" exitCode=0 Feb 02 00:22:24 crc kubenswrapper[5108]: I0202 00:22:24.074113 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"13d5efa3-18a2-405c-96ec-e5ee2d3014b2","Type":"ContainerDied","Data":"6ccc56e44008c8bdc70fabbd8ac843e8bb5c8f578b2f88a9b867948e4db96b0c"} Feb 02 00:22:24 crc 
kubenswrapper[5108]: I0202 00:22:24.077657 5108 generic.go:358] "Generic (PLEG): container finished" podID="91781fe7-72ca-4748-8dcd-5d7d1c275472" containerID="6300a2dc28e3c4f04ee436a881ab1be37cfdad5111656a4146dacc4c870adee4" exitCode=0 Feb 02 00:22:24 crc kubenswrapper[5108]: I0202 00:22:24.077710 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"91781fe7-72ca-4748-8dcd-5d7d1c275472","Type":"ContainerDied","Data":"6300a2dc28e3c4f04ee436a881ab1be37cfdad5111656a4146dacc4c870adee4"} Feb 02 00:22:25 crc kubenswrapper[5108]: I0202 00:22:25.045676 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-md5xl" Feb 02 00:22:25 crc kubenswrapper[5108]: I0202 00:22:25.093808 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"91781fe7-72ca-4748-8dcd-5d7d1c275472","Type":"ContainerStarted","Data":"d2e711aff88f7d44f468273aa8bf1d2828eb4f109c32cda98d1d9f783d366c4c"} Feb 02 00:22:25 crc kubenswrapper[5108]: I0202 00:22:25.094257 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:22:25 crc kubenswrapper[5108]: I0202 00:22:25.149450 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=9.53675842 podStartE2EDuration="33.149426722s" podCreationTimestamp="2026-02-02 00:21:52 +0000 UTC" firstStartedPulling="2026-02-02 00:21:54.515292646 +0000 UTC m=+713.790789576" lastFinishedPulling="2026-02-02 00:22:18.127960948 +0000 UTC m=+737.403457878" observedRunningTime="2026-02-02 00:22:25.144253627 +0000 UTC m=+744.419750577" watchObservedRunningTime="2026-02-02 00:22:25.149426722 +0000 UTC m=+744.424923652" Feb 02 00:22:28 crc kubenswrapper[5108]: I0202 00:22:28.128728 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"13d5efa3-18a2-405c-96ec-e5ee2d3014b2","Type":"ContainerStarted","Data":"3a950f31bbc63a2355d530a73743f3bf4b083eb86e5f832f78d48525d315daa7"} Feb 02 00:22:28 crc kubenswrapper[5108]: I0202 00:22:28.158480 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" podStartSLOduration=4.853456082 podStartE2EDuration="14.158449126s" podCreationTimestamp="2026-02-02 00:22:14 +0000 UTC" firstStartedPulling="2026-02-02 00:22:18.226863697 +0000 UTC m=+737.502360627" lastFinishedPulling="2026-02-02 00:22:27.531856741 +0000 UTC m=+746.807353671" observedRunningTime="2026-02-02 00:22:28.152673235 +0000 UTC m=+747.428170205" watchObservedRunningTime="2026-02-02 00:22:28.158449126 +0000 UTC m=+747.433946056" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.060796 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"] Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.069041 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.083215 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"] Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.200446 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.200594 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.200670 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpm6\" (UniqueName: \"kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.301969 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.302081 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrpm6\" (UniqueName: \"kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.302161 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.302516 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " 
pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.302715 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.345354 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrpm6\" (UniqueName: \"kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.428529 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:30 crc kubenswrapper[5108]: I0202 00:22:30.889196 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x"] Feb 02 00:22:30 crc kubenswrapper[5108]: W0202 00:22:30.897463 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf02ca82_ac58_4944_8da6_d006cf605640.slice/crio-a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42 WatchSource:0}: Error finding container a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42: Status 404 returned error can't find the container with id a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42 Feb 02 00:22:31 crc kubenswrapper[5108]: I0202 00:22:31.157271 5108 generic.go:358] "Generic (PLEG): container finished" podID="af02ca82-ac58-4944-8da6-d006cf605640" containerID="acb9b8d3b29f8fd43d52f5da5189aa8849e58cc27d2bfa608f35d89115d8f06d" exitCode=0 Feb 02 00:22:31 crc kubenswrapper[5108]: I0202 00:22:31.157396 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" event={"ID":"af02ca82-ac58-4944-8da6-d006cf605640","Type":"ContainerDied","Data":"acb9b8d3b29f8fd43d52f5da5189aa8849e58cc27d2bfa608f35d89115d8f06d"} Feb 02 00:22:31 crc kubenswrapper[5108]: I0202 00:22:31.157857 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" event={"ID":"af02ca82-ac58-4944-8da6-d006cf605640","Type":"ContainerStarted","Data":"a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42"} Feb 02 00:22:32 crc kubenswrapper[5108]: I0202 00:22:32.171489 5108 generic.go:358] "Generic (PLEG): container finished" podID="af02ca82-ac58-4944-8da6-d006cf605640" containerID="a466db46a9799efec35e7ce18b379a2896b9217339a99b02488b81f0e5c8affe" exitCode=0 Feb 02 00:22:32 crc kubenswrapper[5108]: I0202 00:22:32.171679 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" 
event={"ID":"af02ca82-ac58-4944-8da6-d006cf605640","Type":"ContainerDied","Data":"a466db46a9799efec35e7ce18b379a2896b9217339a99b02488b81f0e5c8affe"} Feb 02 00:22:33 crc kubenswrapper[5108]: I0202 00:22:33.185281 5108 generic.go:358] "Generic (PLEG): container finished" podID="af02ca82-ac58-4944-8da6-d006cf605640" containerID="2a5f602da0b8b8e3ac79a7a7ed93ea2a25f5241caad2fd9c08e65a6bf55bcfb8" exitCode=0 Feb 02 00:22:33 crc kubenswrapper[5108]: I0202 00:22:33.185361 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" event={"ID":"af02ca82-ac58-4944-8da6-d006cf605640","Type":"ContainerDied","Data":"2a5f602da0b8b8e3ac79a7a7ed93ea2a25f5241caad2fd9c08e65a6bf55bcfb8"} Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.450734 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.573381 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util\") pod \"af02ca82-ac58-4944-8da6-d006cf605640\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.573732 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrpm6\" (UniqueName: \"kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6\") pod \"af02ca82-ac58-4944-8da6-d006cf605640\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.575056 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle\") pod \"af02ca82-ac58-4944-8da6-d006cf605640\" (UID: \"af02ca82-ac58-4944-8da6-d006cf605640\") " Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.576347 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle" (OuterVolumeSpecName: "bundle") pod "af02ca82-ac58-4944-8da6-d006cf605640" (UID: "af02ca82-ac58-4944-8da6-d006cf605640"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.577432 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.588786 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util" (OuterVolumeSpecName: "util") pod "af02ca82-ac58-4944-8da6-d006cf605640" (UID: "af02ca82-ac58-4944-8da6-d006cf605640"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.599071 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6" (OuterVolumeSpecName: "kube-api-access-wrpm6") pod "af02ca82-ac58-4944-8da6-d006cf605640" (UID: "af02ca82-ac58-4944-8da6-d006cf605640"). InnerVolumeSpecName "kube-api-access-wrpm6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.679412 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af02ca82-ac58-4944-8da6-d006cf605640-util\") on node \"crc\" DevicePath \"\"" Feb 02 00:22:34 crc kubenswrapper[5108]: I0202 00:22:34.679464 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wrpm6\" (UniqueName: \"kubernetes.io/projected/af02ca82-ac58-4944-8da6-d006cf605640-kube-api-access-wrpm6\") on node \"crc\" DevicePath \"\"" Feb 02 00:22:35 crc kubenswrapper[5108]: I0202 00:22:35.204133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" event={"ID":"af02ca82-ac58-4944-8da6-d006cf605640","Type":"ContainerDied","Data":"a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42"} Feb 02 00:22:35 crc kubenswrapper[5108]: I0202 00:22:35.205032 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a62ed66e2f265bdb5d7922f2380879a1183923607577d1cd2dee46ea534d4c42" Feb 02 00:22:35 crc kubenswrapper[5108]: I0202 00:22:35.204445 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661hhj7x" Feb 02 00:22:36 crc kubenswrapper[5108]: I0202 00:22:36.252402 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="91781fe7-72ca-4748-8dcd-5d7d1c275472" containerName="elasticsearch" probeResult="failure" output=< Feb 02 00:22:36 crc kubenswrapper[5108]: {"timestamp": "2026-02-02T00:22:36+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 02 00:22:36 crc kubenswrapper[5108]: > Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.689145 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-5f7rf"] Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.689986 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="extract" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690000 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="extract" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690023 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="util" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690028 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="util" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690038 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="pull" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690043 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="pull" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.690143 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="af02ca82-ac58-4944-8da6-d006cf605640" containerName="extract" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.693673 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.696207 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-bzxlm\"" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.711095 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-5f7rf"] Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.847791 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/02251320-d565-4211-98ff-a138f7924888-kube-api-access-fjqm2\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.847870 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/02251320-d565-4211-98ff-a138f7924888-runner\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.949794 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/02251320-d565-4211-98ff-a138f7924888-kube-api-access-fjqm2\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.949851 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/02251320-d565-4211-98ff-a138f7924888-runner\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.950371 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/02251320-d565-4211-98ff-a138f7924888-runner\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:38 crc kubenswrapper[5108]: I0202 00:22:38.977611 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/02251320-d565-4211-98ff-a138f7924888-kube-api-access-fjqm2\") pod \"smart-gateway-operator-97b85656c-5f7rf\" (UID: \"02251320-d565-4211-98ff-a138f7924888\") " pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:39 crc kubenswrapper[5108]: I0202 00:22:39.011270 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" Feb 02 00:22:39 crc kubenswrapper[5108]: I0202 00:22:39.217021 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-5f7rf"] Feb 02 00:22:39 crc kubenswrapper[5108]: I0202 00:22:39.234857 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" event={"ID":"02251320-d565-4211-98ff-a138f7924888","Type":"ContainerStarted","Data":"011902d18cacf584871509d282aa2108a1bc7261b97dcbef1079572f992ec1a7"} Feb 02 00:22:41 crc kubenswrapper[5108]: I0202 00:22:41.673732 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 02 00:22:59 crc kubenswrapper[5108]: I0202 00:22:59.441432 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" event={"ID":"02251320-d565-4211-98ff-a138f7924888","Type":"ContainerStarted","Data":"63a673139938b61ed4a645e70a823e744314ec80ba8934594da501563d78a1b7"} Feb 02 00:22:59 crc kubenswrapper[5108]: I0202 00:22:59.475197 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-97b85656c-5f7rf" podStartSLOduration=2.043332174 podStartE2EDuration="21.475171275s" podCreationTimestamp="2026-02-02 00:22:38 +0000 UTC" firstStartedPulling="2026-02-02 00:22:39.224351498 +0000 UTC m=+758.499848418" lastFinishedPulling="2026-02-02 00:22:58.656190589 +0000 UTC m=+777.931687519" observedRunningTime="2026-02-02 00:22:59.464926515 +0000 UTC m=+778.740423485" watchObservedRunningTime="2026-02-02 00:22:59.475171275 +0000 UTC m=+778.750668245" Feb 02 00:23:02 crc kubenswrapper[5108]: I0202 00:23:02.093151 5108 scope.go:117] "RemoveContainer" containerID="b0d175fd10d4619cf043b11fd6ec6f1927ee4a1ffad44abf1e805ecf0fef43df" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.649274 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.655154 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.657878 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-catalog-configmap-partition-1\"" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.660203 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.720770 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.720829 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.720962 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbrgn\" (UniqueName: \"kubernetes.io/projected/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-kube-api-access-gbrgn\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.822464 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbrgn\" (UniqueName: \"kubernetes.io/projected/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-kube-api-access-gbrgn\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.822544 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.822572 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: 
\"kubernetes.io/empty-dir/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.823120 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.823453 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.846963 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbrgn\" (UniqueName: \"kubernetes.io/projected/776f0747-5ab3-4ca4-9437-caf3e9c10f6f-kube-api-access-gbrgn\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"776f0747-5ab3-4ca4-9437-caf3e9c10f6f\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:16 crc kubenswrapper[5108]: I0202 00:23:16.974747 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 02 00:23:17 crc kubenswrapper[5108]: I0202 00:23:17.423188 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 02 00:23:17 crc kubenswrapper[5108]: W0202 00:23:17.426934 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod776f0747_5ab3_4ca4_9437_caf3e9c10f6f.slice/crio-f4b12c879aed9ef337bd6d97652ae99b6a6db468c34936882cb5afb78b8cee7c WatchSource:0}: Error finding container f4b12c879aed9ef337bd6d97652ae99b6a6db468c34936882cb5afb78b8cee7c: Status 404 returned error can't find the container with id f4b12c879aed9ef337bd6d97652ae99b6a6db468c34936882cb5afb78b8cee7c Feb 02 00:23:17 crc kubenswrapper[5108]: I0202 00:23:17.564302 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"776f0747-5ab3-4ca4-9437-caf3e9c10f6f","Type":"ContainerStarted","Data":"f4b12c879aed9ef337bd6d97652ae99b6a6db468c34936882cb5afb78b8cee7c"} Feb 02 00:23:18 crc kubenswrapper[5108]: I0202 00:23:18.573331 5108 generic.go:358] "Generic (PLEG): container finished" podID="776f0747-5ab3-4ca4-9437-caf3e9c10f6f" containerID="c7fbe5a6b7bb919b31b754e9af1147639d57ec3eb42ef023dd94a95b29b16577" exitCode=0 Feb 02 00:23:18 crc kubenswrapper[5108]: I0202 00:23:18.573448 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"776f0747-5ab3-4ca4-9437-caf3e9c10f6f","Type":"ContainerDied","Data":"c7fbe5a6b7bb919b31b754e9af1147639d57ec3eb42ef023dd94a95b29b16577"} Feb 02 00:23:20 crc kubenswrapper[5108]: I0202 00:23:20.587435 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"776f0747-5ab3-4ca4-9437-caf3e9c10f6f","Type":"ContainerStarted","Data":"f2120de6853b36bc7f4be377d57c0c2c549989781901e60e60b1fb9ea44b829b"} Feb 02 00:23:20 crc kubenswrapper[5108]: I0202 00:23:20.606419 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=3.015730394 podStartE2EDuration="4.60639798s" podCreationTimestamp="2026-02-02 00:23:16 +0000 UTC" firstStartedPulling="2026-02-02 00:23:18.57447153 +0000 UTC m=+797.849968460" lastFinishedPulling="2026-02-02 00:23:20.165139086 +0000 UTC m=+799.440636046" observedRunningTime="2026-02-02 00:23:20.603849686 +0000 UTC m=+799.879346626" watchObservedRunningTime="2026-02-02 00:23:20.60639798 +0000 UTC m=+799.881894910" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.270271 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9"] Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.278399 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.280925 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9"] Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.283740 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.427829 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.428196 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.428615 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brm56\" (UniqueName: \"kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.529563 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-brm56\" (UniqueName: \"kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.529970 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.530112 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.530752 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.531445 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.550218 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-brm56\" (UniqueName: \"kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:23 crc kubenswrapper[5108]: I0202 00:23:23.609434 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.028107 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9"] Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.047642 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt"] Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.117311 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt"] Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.117441 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.240582 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.240794 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhqhr\" (UniqueName: \"kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.240907 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.342528 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rhqhr\" (UniqueName: \"kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.342626 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.342655 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.343462 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.344311 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.371950 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhqhr\" (UniqueName: \"kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.435624 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.617985 5108 generic.go:358] "Generic (PLEG): container finished" podID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerID="708c0c30cb2dbe9d2b8f4e0cd80d2d367038e08c79f36cdc11388a5b843dd106" exitCode=0 Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.618460 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" event={"ID":"09f8289b-76c1-4e9d-9878-88f41e0289df","Type":"ContainerDied","Data":"708c0c30cb2dbe9d2b8f4e0cd80d2d367038e08c79f36cdc11388a5b843dd106"} Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.618506 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" event={"ID":"09f8289b-76c1-4e9d-9878-88f41e0289df","Type":"ContainerStarted","Data":"2385e8aeff0016640a9fc886b1e2186ae6b1902e8fbc72c5da6b73b443156b01"} Feb 02 00:23:24 crc kubenswrapper[5108]: I0202 00:23:24.641354 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt"] Feb 02 00:23:25 crc kubenswrapper[5108]: I0202 00:23:25.642018 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerStarted","Data":"f0393a7f43db4500070cc032f904361e4e7af3460a98b1a300595e380d5b31c7"} Feb 02 00:23:25 crc kubenswrapper[5108]: I0202 00:23:25.642381 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerStarted","Data":"68e90b91198c229e0af2107143436301ec4b686e48ec5d15fb81ef4ed2103fbe"} Feb 02 00:23:26 crc kubenswrapper[5108]: I0202 00:23:26.650713 5108 generic.go:358] "Generic (PLEG): container finished" podID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerID="7bcfad5d1f49488310c1f60a8b396e30f0d8ccd3d43caab6215d2fbdcbc9ee34" exitCode=0 Feb 02 00:23:26 crc kubenswrapper[5108]: I0202 00:23:26.650898 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" event={"ID":"09f8289b-76c1-4e9d-9878-88f41e0289df","Type":"ContainerDied","Data":"7bcfad5d1f49488310c1f60a8b396e30f0d8ccd3d43caab6215d2fbdcbc9ee34"} Feb 02 00:23:26 crc kubenswrapper[5108]: I0202 
00:23:26.653193 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerID="f0393a7f43db4500070cc032f904361e4e7af3460a98b1a300595e380d5b31c7" exitCode=0 Feb 02 00:23:26 crc kubenswrapper[5108]: I0202 00:23:26.653460 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerDied","Data":"f0393a7f43db4500070cc032f904361e4e7af3460a98b1a300595e380d5b31c7"} Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.004098 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"] Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.008800 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.021337 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"] Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.180913 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dtjn\" (UniqueName: \"kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.180989 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.181034 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.282780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6dtjn\" (UniqueName: \"kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.282837 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.282871 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 
00:23:27.283427 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.283530 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.305925 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dtjn\" (UniqueName: \"kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn\") pod \"redhat-operators-jmpmn\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.329686 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.548399 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"] Feb 02 00:23:27 crc kubenswrapper[5108]: W0202 00:23:27.626306 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3421ef38_8f4b_4f32_9305_3aa037a2f474.slice/crio-cc80ca44c8d7d85c9c58e8d7c8d39e4969cb73287bbb4ba43d998c06499e673e WatchSource:0}: Error finding container cc80ca44c8d7d85c9c58e8d7c8d39e4969cb73287bbb4ba43d998c06499e673e: Status 404 returned error can't find the container with id cc80ca44c8d7d85c9c58e8d7c8d39e4969cb73287bbb4ba43d998c06499e673e Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.673540 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerStarted","Data":"cc80ca44c8d7d85c9c58e8d7c8d39e4969cb73287bbb4ba43d998c06499e673e"} Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.677619 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerID="d93d2cacedcb9b871190b5561ecd48cbb9031d9229506a12beb76097c34e221f" exitCode=0 Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.677859 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerDied","Data":"d93d2cacedcb9b871190b5561ecd48cbb9031d9229506a12beb76097c34e221f"} Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.693780 5108 generic.go:358] "Generic (PLEG): container finished" podID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerID="350095329effad337d6dbbcaa6e9126971ccc0224cbb1c43dcc6d9550d2960a7" exitCode=0 Feb 02 00:23:27 crc kubenswrapper[5108]: I0202 00:23:27.694035 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" event={"ID":"09f8289b-76c1-4e9d-9878-88f41e0289df","Type":"ContainerDied","Data":"350095329effad337d6dbbcaa6e9126971ccc0224cbb1c43dcc6d9550d2960a7"} Feb 02 
00:23:28 crc kubenswrapper[5108]: I0202 00:23:28.703770 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerID="882c9c001cce1aef356d3d8973567eeaf86bf006c81ad45d70ef7856832e09cb" exitCode=0 Feb 02 00:23:28 crc kubenswrapper[5108]: I0202 00:23:28.703810 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerDied","Data":"882c9c001cce1aef356d3d8973567eeaf86bf006c81ad45d70ef7856832e09cb"} Feb 02 00:23:28 crc kubenswrapper[5108]: I0202 00:23:28.705303 5108 generic.go:358] "Generic (PLEG): container finished" podID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerID="cd91e900875a1d2348a55c6e5c86785cf399e66b88e106b4dd590563e0ece655" exitCode=0 Feb 02 00:23:28 crc kubenswrapper[5108]: I0202 00:23:28.705631 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerDied","Data":"cd91e900875a1d2348a55c6e5c86785cf399e66b88e106b4dd590563e0ece655"} Feb 02 00:23:28 crc kubenswrapper[5108]: I0202 00:23:28.956171 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.107805 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle\") pod \"09f8289b-76c1-4e9d-9878-88f41e0289df\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.107934 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brm56\" (UniqueName: \"kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56\") pod \"09f8289b-76c1-4e9d-9878-88f41e0289df\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.107976 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util\") pod \"09f8289b-76c1-4e9d-9878-88f41e0289df\" (UID: \"09f8289b-76c1-4e9d-9878-88f41e0289df\") " Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.109110 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle" (OuterVolumeSpecName: "bundle") pod "09f8289b-76c1-4e9d-9878-88f41e0289df" (UID: "09f8289b-76c1-4e9d-9878-88f41e0289df"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.117790 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56" (OuterVolumeSpecName: "kube-api-access-brm56") pod "09f8289b-76c1-4e9d-9878-88f41e0289df" (UID: "09f8289b-76c1-4e9d-9878-88f41e0289df"). InnerVolumeSpecName "kube-api-access-brm56". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.120392 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util" (OuterVolumeSpecName: "util") pod "09f8289b-76c1-4e9d-9878-88f41e0289df" (UID: "09f8289b-76c1-4e9d-9878-88f41e0289df"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.210006 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-util\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.210057 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/09f8289b-76c1-4e9d-9878-88f41e0289df-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.210075 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-brm56\" (UniqueName: \"kubernetes.io/projected/09f8289b-76c1-4e9d-9878-88f41e0289df-kube-api-access-brm56\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.716208 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerStarted","Data":"6bdd2026306d17209ef054fa2900fb6f5744892f6addca0b14a3d700e1cd1394"} Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.719677 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.720176 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9" event={"ID":"09f8289b-76c1-4e9d-9878-88f41e0289df","Type":"ContainerDied","Data":"2385e8aeff0016640a9fc886b1e2186ae6b1902e8fbc72c5da6b73b443156b01"} Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.720208 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2385e8aeff0016640a9fc886b1e2186ae6b1902e8fbc72c5da6b73b443156b01" Feb 02 00:23:29 crc kubenswrapper[5108]: I0202 00:23:29.926912 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.020976 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhqhr\" (UniqueName: \"kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr\") pod \"0b9c2624-6584-48ce-9b40-5f866de6d896\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.021102 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util\") pod \"0b9c2624-6584-48ce-9b40-5f866de6d896\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.021123 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle\") pod \"0b9c2624-6584-48ce-9b40-5f866de6d896\" (UID: \"0b9c2624-6584-48ce-9b40-5f866de6d896\") " Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.021902 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle" (OuterVolumeSpecName: "bundle") pod "0b9c2624-6584-48ce-9b40-5f866de6d896" (UID: "0b9c2624-6584-48ce-9b40-5f866de6d896"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.034591 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util" (OuterVolumeSpecName: "util") pod "0b9c2624-6584-48ce-9b40-5f866de6d896" (UID: "0b9c2624-6584-48ce-9b40-5f866de6d896"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.040208 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr" (OuterVolumeSpecName: "kube-api-access-rhqhr") pod "0b9c2624-6584-48ce-9b40-5f866de6d896" (UID: "0b9c2624-6584-48ce-9b40-5f866de6d896"). InnerVolumeSpecName "kube-api-access-rhqhr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.122728 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-util\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.122756 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b9c2624-6584-48ce-9b40-5f866de6d896-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.122765 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rhqhr\" (UniqueName: \"kubernetes.io/projected/0b9c2624-6584-48ce-9b40-5f866de6d896-kube-api-access-rhqhr\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.729123 5108 generic.go:358] "Generic (PLEG): container finished" podID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerID="6bdd2026306d17209ef054fa2900fb6f5744892f6addca0b14a3d700e1cd1394" exitCode=0 Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.729211 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerDied","Data":"6bdd2026306d17209ef054fa2900fb6f5744892f6addca0b14a3d700e1cd1394"} Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.737125 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" event={"ID":"0b9c2624-6584-48ce-9b40-5f866de6d896","Type":"ContainerDied","Data":"68e90b91198c229e0af2107143436301ec4b686e48ec5d15fb81ef4ed2103fbe"} Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.737167 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68e90b91198c229e0af2107143436301ec4b686e48ec5d15fb81ef4ed2103fbe" Feb 02 00:23:30 crc kubenswrapper[5108]: I0202 00:23:30.737277 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p6kkt" Feb 02 00:23:31 crc kubenswrapper[5108]: I0202 00:23:31.744640 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerStarted","Data":"7fc706bcab6af73d9ba0a9a7620b155fe61d7986b2d16ec5c61188720ace2398"} Feb 02 00:23:31 crc kubenswrapper[5108]: I0202 00:23:31.765204 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jmpmn" podStartSLOduration=5.113270032 podStartE2EDuration="5.76518662s" podCreationTimestamp="2026-02-02 00:23:26 +0000 UTC" firstStartedPulling="2026-02-02 00:23:28.706142232 +0000 UTC m=+807.981639152" lastFinishedPulling="2026-02-02 00:23:29.35805881 +0000 UTC m=+808.633555740" observedRunningTime="2026-02-02 00:23:31.76283532 +0000 UTC m=+811.038332270" watchObservedRunningTime="2026-02-02 00:23:31.76518662 +0000 UTC m=+811.040683550" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.330904 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.331302 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.393165 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.845105 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.952670 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-7r9xw"] Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953525 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="pull" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953547 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="pull" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953564 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953571 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953591 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="pull" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953598 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="pull" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953609 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953614 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" 
containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953626 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="util" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953632 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="util" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953647 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="util" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953653 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="util" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953794 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="0b9c2624-6584-48ce-9b40-5f866de6d896" containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.953817 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="09f8289b-76c1-4e9d-9878-88f41e0289df" containerName="extract" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.963247 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.965682 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-p4gtg\"" Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.968465 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-7r9xw"] Feb 02 00:23:37 crc kubenswrapper[5108]: I0202 00:23:37.982630 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdlj9\" (UniqueName: \"kubernetes.io/projected/3ea9b720-173a-450f-8359-555796dc329f-kube-api-access-rdlj9\") pod \"interconnect-operator-78b9bd8798-7r9xw\" (UID: \"3ea9b720-173a-450f-8359-555796dc329f\") " pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" Feb 02 00:23:38 crc kubenswrapper[5108]: I0202 00:23:38.084110 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rdlj9\" (UniqueName: \"kubernetes.io/projected/3ea9b720-173a-450f-8359-555796dc329f-kube-api-access-rdlj9\") pod \"interconnect-operator-78b9bd8798-7r9xw\" (UID: \"3ea9b720-173a-450f-8359-555796dc329f\") " pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" Feb 02 00:23:38 crc kubenswrapper[5108]: I0202 00:23:38.113215 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdlj9\" (UniqueName: \"kubernetes.io/projected/3ea9b720-173a-450f-8359-555796dc329f-kube-api-access-rdlj9\") pod \"interconnect-operator-78b9bd8798-7r9xw\" (UID: \"3ea9b720-173a-450f-8359-555796dc329f\") " pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" Feb 02 00:23:38 crc kubenswrapper[5108]: I0202 00:23:38.276901 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" Feb 02 00:23:38 crc kubenswrapper[5108]: I0202 00:23:38.514074 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-7r9xw"] Feb 02 00:23:38 crc kubenswrapper[5108]: I0202 00:23:38.798288 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" event={"ID":"3ea9b720-173a-450f-8359-555796dc329f","Type":"ContainerStarted","Data":"8eddc6b7ff54eb81c4e93a6993467b1be7c9ba29f7194b10d71d6125d75d691d"} Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.281680 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-6gtwj"] Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.527709 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-6gtwj"] Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.527845 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.530104 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-fkjnl\"" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.601868 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"] Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.602154 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jmpmn" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="registry-server" containerID="cri-o://7fc706bcab6af73d9ba0a9a7620b155fe61d7986b2d16ec5c61188720ace2398" gracePeriod=2 Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.627725 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82z6r\" (UniqueName: \"kubernetes.io/projected/1c4a2dde-667e-45e3-8d53-9219bcfd2214-kube-api-access-82z6r\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" (UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.627855 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/1c4a2dde-667e-45e3-8d53-9219bcfd2214-runner\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" (UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.729465 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/1c4a2dde-667e-45e3-8d53-9219bcfd2214-runner\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" (UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.729906 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82z6r\" (UniqueName: \"kubernetes.io/projected/1c4a2dde-667e-45e3-8d53-9219bcfd2214-kube-api-access-82z6r\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" 
(UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.730007 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/1c4a2dde-667e-45e3-8d53-9219bcfd2214-runner\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" (UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.760818 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82z6r\" (UniqueName: \"kubernetes.io/projected/1c4a2dde-667e-45e3-8d53-9219bcfd2214-kube-api-access-82z6r\") pod \"service-telemetry-operator-794b5697c7-6gtwj\" (UID: \"1c4a2dde-667e-45e3-8d53-9219bcfd2214\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.823654 5108 generic.go:358] "Generic (PLEG): container finished" podID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerID="7fc706bcab6af73d9ba0a9a7620b155fe61d7986b2d16ec5c61188720ace2398" exitCode=0 Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.824078 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerDied","Data":"7fc706bcab6af73d9ba0a9a7620b155fe61d7986b2d16ec5c61188720ace2398"} Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.852187 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" Feb 02 00:23:40 crc kubenswrapper[5108]: I0202 00:23:40.970114 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.034970 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content\") pod \"3421ef38-8f4b-4f32-9305-3aa037a2f474\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.035284 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities\") pod \"3421ef38-8f4b-4f32-9305-3aa037a2f474\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.035477 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dtjn\" (UniqueName: \"kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn\") pod \"3421ef38-8f4b-4f32-9305-3aa037a2f474\" (UID: \"3421ef38-8f4b-4f32-9305-3aa037a2f474\") " Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.036558 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities" (OuterVolumeSpecName: "utilities") pod "3421ef38-8f4b-4f32-9305-3aa037a2f474" (UID: "3421ef38-8f4b-4f32-9305-3aa037a2f474"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.041693 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn" (OuterVolumeSpecName: "kube-api-access-6dtjn") pod "3421ef38-8f4b-4f32-9305-3aa037a2f474" (UID: "3421ef38-8f4b-4f32-9305-3aa037a2f474"). InnerVolumeSpecName "kube-api-access-6dtjn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.118175 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-6gtwj"] Feb 02 00:23:41 crc kubenswrapper[5108]: W0202 00:23:41.130243 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c4a2dde_667e_45e3_8d53_9219bcfd2214.slice/crio-7a87d5bfbc3cab366004772d683a406553429a92615bb049e70a4c42f429cfdd WatchSource:0}: Error finding container 7a87d5bfbc3cab366004772d683a406553429a92615bb049e70a4c42f429cfdd: Status 404 returned error can't find the container with id 7a87d5bfbc3cab366004772d683a406553429a92615bb049e70a4c42f429cfdd Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.143269 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.143298 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dtjn\" (UniqueName: \"kubernetes.io/projected/3421ef38-8f4b-4f32-9305-3aa037a2f474-kube-api-access-6dtjn\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.156247 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3421ef38-8f4b-4f32-9305-3aa037a2f474" (UID: "3421ef38-8f4b-4f32-9305-3aa037a2f474"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.244992 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3421ef38-8f4b-4f32-9305-3aa037a2f474-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.835515 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" event={"ID":"1c4a2dde-667e-45e3-8d53-9219bcfd2214","Type":"ContainerStarted","Data":"7a87d5bfbc3cab366004772d683a406553429a92615bb049e70a4c42f429cfdd"} Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.843507 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmpmn" event={"ID":"3421ef38-8f4b-4f32-9305-3aa037a2f474","Type":"ContainerDied","Data":"cc80ca44c8d7d85c9c58e8d7c8d39e4969cb73287bbb4ba43d998c06499e673e"} Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.843572 5108 scope.go:117] "RemoveContainer" containerID="7fc706bcab6af73d9ba0a9a7620b155fe61d7986b2d16ec5c61188720ace2398" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.843765 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jmpmn" Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.874514 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"] Feb 02 00:23:41 crc kubenswrapper[5108]: I0202 00:23:41.882481 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jmpmn"] Feb 02 00:23:43 crc kubenswrapper[5108]: I0202 00:23:43.570111 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" path="/var/lib/kubelet/pods/3421ef38-8f4b-4f32-9305-3aa037a2f474/volumes" Feb 02 00:23:45 crc kubenswrapper[5108]: I0202 00:23:45.665097 5108 scope.go:117] "RemoveContainer" containerID="6bdd2026306d17209ef054fa2900fb6f5744892f6addca0b14a3d700e1cd1394" Feb 02 00:23:45 crc kubenswrapper[5108]: I0202 00:23:45.724545 5108 scope.go:117] "RemoveContainer" containerID="cd91e900875a1d2348a55c6e5c86785cf399e66b88e106b4dd590563e0ece655" Feb 02 00:23:46 crc kubenswrapper[5108]: I0202 00:23:46.929682 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" event={"ID":"3ea9b720-173a-450f-8359-555796dc329f","Type":"ContainerStarted","Data":"2cc4e47a14e9d721551cf58d75a2e19d77b1eea60175f8eb66445f0ecc31f982"} Feb 02 00:23:46 crc kubenswrapper[5108]: I0202 00:23:46.955221 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-7r9xw" podStartSLOduration=2.697221897 podStartE2EDuration="9.955203375s" podCreationTimestamp="2026-02-02 00:23:37 +0000 UTC" firstStartedPulling="2026-02-02 00:23:38.516718208 +0000 UTC m=+817.792215138" lastFinishedPulling="2026-02-02 00:23:45.774699676 +0000 UTC m=+825.050196616" observedRunningTime="2026-02-02 00:23:46.952753463 +0000 UTC m=+826.228250473" watchObservedRunningTime="2026-02-02 00:23:46.955203375 +0000 UTC m=+826.230700305" Feb 02 00:23:53 crc kubenswrapper[5108]: I0202 00:23:53.003892 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" event={"ID":"1c4a2dde-667e-45e3-8d53-9219bcfd2214","Type":"ContainerStarted","Data":"1bf324fbd7d3f1961d09f7bce6af69dc46d35ed66152ea01ff2b756d0862b6e9"} Feb 02 00:23:53 crc kubenswrapper[5108]: I0202 00:23:53.031872 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-794b5697c7-6gtwj" podStartSLOduration=1.5056129870000001 podStartE2EDuration="13.031846651s" podCreationTimestamp="2026-02-02 00:23:40 +0000 UTC" firstStartedPulling="2026-02-02 00:23:41.134113592 +0000 UTC m=+820.409610522" lastFinishedPulling="2026-02-02 00:23:52.660347256 +0000 UTC m=+831.935844186" observedRunningTime="2026-02-02 00:23:53.023666703 +0000 UTC m=+832.299163623" watchObservedRunningTime="2026-02-02 00:23:53.031846651 +0000 UTC m=+832.307343621" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.137784 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499864-pnc7n"] Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139256 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="registry-server" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139291 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" 
containerName="registry-server" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139313 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="extract-content" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139328 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="extract-content" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139406 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="extract-utilities" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139419 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="extract-utilities" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.139592 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3421ef38-8f4b-4f32-9305-3aa037a2f474" containerName="registry-server" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.147200 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499864-pnc7n"] Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.147365 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.149754 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.149904 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.151739 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.255745 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9gr\" (UniqueName: \"kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr\") pod \"auto-csr-approver-29499864-pnc7n\" (UID: \"085299b1-a0db-40df-ab74-d8bf934d61bc\") " pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.357177 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9gr\" (UniqueName: \"kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr\") pod \"auto-csr-approver-29499864-pnc7n\" (UID: \"085299b1-a0db-40df-ab74-d8bf934d61bc\") " pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.376253 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9gr\" (UniqueName: \"kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr\") pod \"auto-csr-approver-29499864-pnc7n\" (UID: \"085299b1-a0db-40df-ab74-d8bf934d61bc\") " pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.466758 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:00 crc kubenswrapper[5108]: I0202 00:24:00.713401 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499864-pnc7n"] Feb 02 00:24:01 crc kubenswrapper[5108]: I0202 00:24:01.067241 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" event={"ID":"085299b1-a0db-40df-ab74-d8bf934d61bc","Type":"ContainerStarted","Data":"a01de391b5cd6a122a36f19cff054fa668a0bc7266f343b71c5faa6068ff2623"} Feb 02 00:24:02 crc kubenswrapper[5108]: I0202 00:24:02.078033 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" event={"ID":"085299b1-a0db-40df-ab74-d8bf934d61bc","Type":"ContainerStarted","Data":"998e5f1fcc87712044852b3976957ba53e7f51bedc7d5c688980e4b72248f874"} Feb 02 00:24:02 crc kubenswrapper[5108]: I0202 00:24:02.093990 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" podStartSLOduration=1.2239795980000001 podStartE2EDuration="2.093966789s" podCreationTimestamp="2026-02-02 00:24:00 +0000 UTC" firstStartedPulling="2026-02-02 00:24:00.737639409 +0000 UTC m=+840.013136369" lastFinishedPulling="2026-02-02 00:24:01.6076266 +0000 UTC m=+840.883123560" observedRunningTime="2026-02-02 00:24:02.092748619 +0000 UTC m=+841.368245559" watchObservedRunningTime="2026-02-02 00:24:02.093966789 +0000 UTC m=+841.369463729" Feb 02 00:24:03 crc kubenswrapper[5108]: I0202 00:24:03.104045 5108 generic.go:358] "Generic (PLEG): container finished" podID="085299b1-a0db-40df-ab74-d8bf934d61bc" containerID="998e5f1fcc87712044852b3976957ba53e7f51bedc7d5c688980e4b72248f874" exitCode=0 Feb 02 00:24:03 crc kubenswrapper[5108]: I0202 00:24:03.104178 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" event={"ID":"085299b1-a0db-40df-ab74-d8bf934d61bc","Type":"ContainerDied","Data":"998e5f1fcc87712044852b3976957ba53e7f51bedc7d5c688980e4b72248f874"} Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.437112 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.521357 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9gr\" (UniqueName: \"kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr\") pod \"085299b1-a0db-40df-ab74-d8bf934d61bc\" (UID: \"085299b1-a0db-40df-ab74-d8bf934d61bc\") " Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.535439 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr" (OuterVolumeSpecName: "kube-api-access-zf9gr") pod "085299b1-a0db-40df-ab74-d8bf934d61bc" (UID: "085299b1-a0db-40df-ab74-d8bf934d61bc"). InnerVolumeSpecName "kube-api-access-zf9gr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.622913 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zf9gr\" (UniqueName: \"kubernetes.io/projected/085299b1-a0db-40df-ab74-d8bf934d61bc-kube-api-access-zf9gr\") on node \"crc\" DevicePath \"\"" Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.670127 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499858-dzzxv"] Feb 02 00:24:04 crc kubenswrapper[5108]: I0202 00:24:04.678870 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499858-dzzxv"] Feb 02 00:24:05 crc kubenswrapper[5108]: I0202 00:24:05.122107 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" event={"ID":"085299b1-a0db-40df-ab74-d8bf934d61bc","Type":"ContainerDied","Data":"a01de391b5cd6a122a36f19cff054fa668a0bc7266f343b71c5faa6068ff2623"} Feb 02 00:24:05 crc kubenswrapper[5108]: I0202 00:24:05.122171 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a01de391b5cd6a122a36f19cff054fa668a0bc7266f343b71c5faa6068ff2623" Feb 02 00:24:05 crc kubenswrapper[5108]: I0202 00:24:05.122184 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499864-pnc7n" Feb 02 00:24:05 crc kubenswrapper[5108]: I0202 00:24:05.565407 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431bfb08-11a6-4c66-893c-650ea32d97b3" path="/var/lib/kubelet/pods/431bfb08-11a6-4c66-893c-650ea32d97b3/volumes" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.672078 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.673769 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="085299b1-a0db-40df-ab74-d8bf934d61bc" containerName="oc" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.673789 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="085299b1-a0db-40df-ab74-d8bf934d61bc" containerName="oc" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.673986 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="085299b1-a0db-40df-ab74-d8bf934d61bc" containerName="oc" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.678927 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.682890 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.683301 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.683537 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.686778 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.687105 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.687337 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-mxfv9\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.687520 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.695390 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751456 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88kl\" (UniqueName: \"kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751535 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751568 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751601 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: 
I0202 00:24:13.751630 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751676 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.751698 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853066 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b88kl\" (UniqueName: \"kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853594 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853627 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853665 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853701 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853760 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.853783 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.854861 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.863032 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.863049 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.863030 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.863250 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.872031 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b88kl\" (UniqueName: \"kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:13 crc kubenswrapper[5108]: I0202 00:24:13.875511 
5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xsgkr\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:14 crc kubenswrapper[5108]: I0202 00:24:14.001173 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:24:14 crc kubenswrapper[5108]: I0202 00:24:14.211999 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:24:15 crc kubenswrapper[5108]: I0202 00:24:15.199379 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" event={"ID":"22703395-ebd0-469b-aec4-b703ed4a8e65","Type":"ContainerStarted","Data":"be460dd189cbfc5a2a37f3ba1e3bf4c61862c2876dd659904fe0292f2bbf5517"} Feb 02 00:24:20 crc kubenswrapper[5108]: I0202 00:24:20.236307 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" event={"ID":"22703395-ebd0-469b-aec4-b703ed4a8e65","Type":"ContainerStarted","Data":"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1"} Feb 02 00:24:20 crc kubenswrapper[5108]: I0202 00:24:20.275473 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" podStartSLOduration=2.4035567970000002 podStartE2EDuration="7.275432395s" podCreationTimestamp="2026-02-02 00:24:13 +0000 UTC" firstStartedPulling="2026-02-02 00:24:14.222457188 +0000 UTC m=+853.497954128" lastFinishedPulling="2026-02-02 00:24:19.094332796 +0000 UTC m=+858.369829726" observedRunningTime="2026-02-02 00:24:20.269796751 +0000 UTC m=+859.545293691" watchObservedRunningTime="2026-02-02 00:24:20.275432395 +0000 UTC m=+859.550929365" Feb 02 00:24:20 crc kubenswrapper[5108]: I0202 00:24:20.920114 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:24:20 crc kubenswrapper[5108]: I0202 00:24:20.920200 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.128438 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.607397 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.608573 5108 util.go:30] "No sandbox for pod can be found. 
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.612637 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\""
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.612874 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-9578k\""
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613013 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\""
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613272 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\""
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613358 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\""
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613431 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\""
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613717 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\""
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.613756 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\""
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.614681 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\""
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.615795 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\""
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718340 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config-out\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718391 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-web-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718423 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718464 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718642 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brkjv\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-kube-api-access-brkjv\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718748 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718794 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718857 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.718921 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.719432 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.719537 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.719978 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-tls-assets\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.821898 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822017 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822064 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822116 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-tls-assets\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822145 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config-out\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822167 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-web-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822197 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822274 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822311 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-brkjv\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-kube-api-access-brkjv\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822352 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822379 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.822410 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.824465 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: E0202 00:24:24.824644 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Feb 02 00:24:24 crc kubenswrapper[5108]: E0202 00:24:24.824749 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls podName:3180ec82-70eb-4837-9eed-a92e41e5e3fc nodeName:}" failed. No retries permitted until 2026-02-02 00:24:25.324726425 +0000 UTC m=+864.600223365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3180ec82-70eb-4837-9eed-a92e41e5e3fc") : secret "default-prometheus-proxy-tls" not found
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.825458 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.825827 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.826042 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3180ec82-70eb-4837-9eed-a92e41e5e3fc-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.829970 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.830016 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f745048d2e71c93a548e077a7ba1794f9de151f8f7067605ba7384d3e5bae71c/globalmount\"" pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.833364 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-tls-assets\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.833640 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-web-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.835691 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.837655 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3180ec82-70eb-4837-9eed-a92e41e5e3fc-config-out\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.845009 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.847745 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-brkjv\" (UniqueName: \"kubernetes.io/projected/3180ec82-70eb-4837-9eed-a92e41e5e3fc-kube-api-access-brkjv\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:24 crc kubenswrapper[5108]: I0202 00:24:24.876716 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3bc2118c-5552-46ec-b7f6-a48561e94293\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:25 crc kubenswrapper[5108]: I0202 00:24:25.330130 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:25 crc kubenswrapper[5108]: E0202 00:24:25.330313 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Feb 02 00:24:25 crc kubenswrapper[5108]: E0202 00:24:25.330390 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls podName:3180ec82-70eb-4837-9eed-a92e41e5e3fc nodeName:}" failed. No retries permitted until 2026-02-02 00:24:26.330374662 +0000 UTC m=+865.605871582 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3180ec82-70eb-4837-9eed-a92e41e5e3fc") : secret "default-prometheus-proxy-tls" not found
Feb 02 00:24:26 crc kubenswrapper[5108]: I0202 00:24:26.346248 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:26 crc kubenswrapper[5108]: I0202 00:24:26.351827 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3180ec82-70eb-4837-9eed-a92e41e5e3fc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3180ec82-70eb-4837-9eed-a92e41e5e3fc\") " pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:26 crc kubenswrapper[5108]: I0202 00:24:26.445067 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Feb 02 00:24:26 crc kubenswrapper[5108]: I0202 00:24:26.747261 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Feb 02 00:24:27 crc kubenswrapper[5108]: I0202 00:24:27.297607 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerStarted","Data":"21816529956a895b1886c1da9681b3ad3a8c8ec009f5864512f2da090fdc8af4"}
Feb 02 00:24:30 crc kubenswrapper[5108]: I0202 00:24:30.329995 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerStarted","Data":"241c26cd2a74392762363fb6bdfd7db40fcbd0e3c90a3a038e12d62ada2fcf10"}
Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.514162 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"]
Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.521394 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"
Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.525263 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"]
Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.567939 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w46v\" (UniqueName: \"kubernetes.io/projected/4431ddda-6bd1-43de-8d6e-c5829580e15e-kube-api-access-2w46v\") pod \"default-snmp-webhook-6774d8dfbc-sfrh8\" (UID: \"4431ddda-6bd1-43de-8d6e-c5829580e15e\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"
Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.669434 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2w46v\" (UniqueName: \"kubernetes.io/projected/4431ddda-6bd1-43de-8d6e-c5829580e15e-kube-api-access-2w46v\") pod \"default-snmp-webhook-6774d8dfbc-sfrh8\" (UID: \"4431ddda-6bd1-43de-8d6e-c5829580e15e\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"
Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.691348 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w46v\" (UniqueName: \"kubernetes.io/projected/4431ddda-6bd1-43de-8d6e-c5829580e15e-kube-api-access-2w46v\") pod \"default-snmp-webhook-6774d8dfbc-sfrh8\" (UID: \"4431ddda-6bd1-43de-8d6e-c5829580e15e\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"
Feb 02 00:24:34 crc kubenswrapper[5108]: I0202 00:24:34.836345 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"
Feb 02 00:24:35 crc kubenswrapper[5108]: I0202 00:24:35.281877 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8"]
Feb 02 00:24:35 crc kubenswrapper[5108]: I0202 00:24:35.380938 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" event={"ID":"4431ddda-6bd1-43de-8d6e-c5829580e15e","Type":"ContainerStarted","Data":"0ed4782617619d2edca6daab3f32582a23837d60d35902016d6ae1f93645a7f5"}
Feb 02 00:24:37 crc kubenswrapper[5108]: I0202 00:24:37.400655 5108 generic.go:358] "Generic (PLEG): container finished" podID="3180ec82-70eb-4837-9eed-a92e41e5e3fc" containerID="241c26cd2a74392762363fb6bdfd7db40fcbd0e3c90a3a038e12d62ada2fcf10" exitCode=0
Feb 02 00:24:37 crc kubenswrapper[5108]: I0202 00:24:37.400764 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerDied","Data":"241c26cd2a74392762363fb6bdfd7db40fcbd0e3c90a3a038e12d62ada2fcf10"}
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.429553 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.465961 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.466218 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469004 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\""
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469254 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\""
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469399 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\""
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469475 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\""
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469409 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\""
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.469617 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-76qhb\""
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530797 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xklx\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-kube-api-access-4xklx\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530846 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6d411794-541c-4416-bd08-cd4f26bc73cb-config-out\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530907 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530927 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530943 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-config-volume\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530962 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.530977 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-web-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.531009 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.531048 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.627568 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"]
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634489 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634620 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634656 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4xklx\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-kube-api-access-4xklx\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634694 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6d411794-541c-4416-bd08-cd4f26bc73cb-config-out\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634774 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634791 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634811 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-config-volume\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634829 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.634846 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-web-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: E0202 00:24:38.647404 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 02 00:24:38 crc kubenswrapper[5108]: E0202 00:24:38.647624 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls podName:6d411794-541c-4416-bd08-cd4f26bc73cb nodeName:}" failed. No retries permitted until 2026-02-02 00:24:39.147598101 +0000 UTC m=+878.423095031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6d411794-541c-4416-bd08-cd4f26bc73cb") : secret "default-alertmanager-proxy-tls" not found
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.648471 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-web-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.650376 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.650454 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-config-volume\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.651151 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"]
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.651327 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.653895 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.663863 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.663903 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/37e7f4321b567342cda29f8152351e56127ff3b7d1ccfdb5a5304f7e4517adc3/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.664843 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xklx\" (UniqueName: \"kubernetes.io/projected/6d411794-541c-4416-bd08-cd4f26bc73cb-kube-api-access-4xklx\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.666698 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6d411794-541c-4416-bd08-cd4f26bc73cb-config-out\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.668128 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.696042 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c413c7b7-d12d-416e-978e-be9c69abf3d8\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.736380 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.736431 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrt86\" (UniqueName: \"kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.736482 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.838261 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.838428 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.838454 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hrt86\" (UniqueName: \"kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.839204 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.839281 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:38 crc kubenswrapper[5108]: I0202 00:24:38.860386 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrt86\" (UniqueName: \"kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86\") pod \"certified-operators-h8vl8\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:39 crc kubenswrapper[5108]: I0202 00:24:39.033285 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8vl8"
Feb 02 00:24:39 crc kubenswrapper[5108]: I0202 00:24:39.244800 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:39 crc kubenswrapper[5108]: E0202 00:24:39.245018 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 02 00:24:39 crc kubenswrapper[5108]: E0202 00:24:39.245152 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls podName:6d411794-541c-4416-bd08-cd4f26bc73cb nodeName:}" failed. No retries permitted until 2026-02-02 00:24:40.245130663 +0000 UTC m=+879.520627593 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6d411794-541c-4416-bd08-cd4f26bc73cb") : secret "default-alertmanager-proxy-tls" not found
Feb 02 00:24:40 crc kubenswrapper[5108]: I0202 00:24:40.260223 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:40 crc kubenswrapper[5108]: E0202 00:24:40.260424 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 02 00:24:40 crc kubenswrapper[5108]: E0202 00:24:40.260550 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls podName:6d411794-541c-4416-bd08-cd4f26bc73cb nodeName:}" failed. No retries permitted until 2026-02-02 00:24:42.260525979 +0000 UTC m=+881.536022969 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6d411794-541c-4416-bd08-cd4f26bc73cb") : secret "default-alertmanager-proxy-tls" not found
Feb 02 00:24:42 crc kubenswrapper[5108]: I0202 00:24:42.289766 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:42 crc kubenswrapper[5108]: I0202 00:24:42.303211 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d411794-541c-4416-bd08-cd4f26bc73cb-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6d411794-541c-4416-bd08-cd4f26bc73cb\") " pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:42 crc kubenswrapper[5108]: I0202 00:24:42.385756 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Feb 02 00:24:43 crc kubenswrapper[5108]: I0202 00:24:43.467564 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"]
Feb 02 00:24:43 crc kubenswrapper[5108]: I0202 00:24:43.507782 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 02 00:24:43 crc kubenswrapper[5108]: W0202 00:24:43.570310 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb43972ad_8935_44fe_a3cb_4ae69a48b27a.slice/crio-144daf304af88934751889a69e636b73a0f5991ae80aef024d91db8efa15874f WatchSource:0}: Error finding container 144daf304af88934751889a69e636b73a0f5991ae80aef024d91db8efa15874f: Status 404 returned error can't find the container with id 144daf304af88934751889a69e636b73a0f5991ae80aef024d91db8efa15874f
Feb 02 00:24:43 crc kubenswrapper[5108]: W0202 00:24:43.572648 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d411794_541c_4416_bd08_cd4f26bc73cb.slice/crio-11f2a58166965e469897b297e5b9503921b3d24fbde35fcc149456fdf2295ca7 WatchSource:0}: Error finding container 11f2a58166965e469897b297e5b9503921b3d24fbde35fcc149456fdf2295ca7: Status 404 returned error can't find the container with id 11f2a58166965e469897b297e5b9503921b3d24fbde35fcc149456fdf2295ca7
Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.450138 5108 generic.go:358] "Generic (PLEG): container finished" podID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerID="229b394fd30cf7d76b0f95baebefc43c286c621c34e04a9822ceaf4d47ea4ecb" exitCode=0
Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.450287 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerDied","Data":"229b394fd30cf7d76b0f95baebefc43c286c621c34e04a9822ceaf4d47ea4ecb"}
Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.450531 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerStarted","Data":"144daf304af88934751889a69e636b73a0f5991ae80aef024d91db8efa15874f"}
Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.455359 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" event={"ID":"4431ddda-6bd1-43de-8d6e-c5829580e15e","Type":"ContainerStarted","Data":"1881156c147b2c41cd6c0479734786a8b860c9cc836037ab9327d23883e7a18f"}
Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.456777 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerStarted","Data":"11f2a58166965e469897b297e5b9503921b3d24fbde35fcc149456fdf2295ca7"}
Feb 02 00:24:44 crc kubenswrapper[5108]: I0202 00:24:44.486973 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-sfrh8" podStartSLOduration=2.504582438 podStartE2EDuration="10.486958142s" podCreationTimestamp="2026-02-02 00:24:34 +0000 UTC" firstStartedPulling="2026-02-02 00:24:35.296638494 +0000 UTC m=+874.572135424" lastFinishedPulling="2026-02-02 00:24:43.279014198 +0000 UTC m=+882.554511128" observedRunningTime="2026-02-02 00:24:44.48250242 +0000 UTC m=+883.757999350" watchObservedRunningTime="2026-02-02 00:24:44.486958142 +0000 UTC m=+883.762455072"
Feb 02 00:24:45 crc kubenswrapper[5108]: I0202 00:24:45.465823 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerStarted","Data":"894dc7cb45e63e8f24935dbff4b899be81fc89008187602b3aa77cf89c213a58"}
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.669243 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4xj84"]
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.677497 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.681499 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xj84"]
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.765459 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkqxk\" (UniqueName: \"kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.765776 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.765794 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.867083 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.867127 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.867211 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hkqxk\" (UniqueName: \"kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.868066 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.868368 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:47 crc kubenswrapper[5108]: I0202 00:24:47.883991 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkqxk\" (UniqueName: \"kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk\") pod \"community-operators-4xj84\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.066250 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xj84"
Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.343478 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xj84"]
Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.492271 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerStarted","Data":"ed59f3102a80ca4b5a1d7c10be89cb344fd7a76759d5c3c7818e734032b6f019"}
Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.495212 5108 generic.go:358] "Generic (PLEG): container finished" podID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerID="abc0b81ac60fdf9242e7b8d30cb6c51ec290df312b9b70459e2737a2692347f4" exitCode=0
Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.495264 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerDied","Data":"abc0b81ac60fdf9242e7b8d30cb6c51ec290df312b9b70459e2737a2692347f4"}
Feb 02 00:24:48 crc kubenswrapper[5108]: I0202 00:24:48.498750 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerStarted","Data":"c042ab2ed34c7be32865470364274d03a9e7b7842d9354a7980bc87c6a237a84"}
Feb 02 00:24:49 crc kubenswrapper[5108]: I0202 00:24:49.506541 5108 generic.go:358] "Generic (PLEG): container finished" podID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerID="aad8617a916aa584794ca1e18d38b92126d401c0258d25de6e56883166b73b19" exitCode=0
Feb 02 00:24:49 crc kubenswrapper[5108]: I0202 00:24:49.506594 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerDied","Data":"aad8617a916aa584794ca1e18d38b92126d401c0258d25de6e56883166b73b19"}
Feb 02 00:24:49 crc kubenswrapper[5108]: I0202 00:24:49.512775 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerStarted","Data":"27b7b7465364708570b1cd87ef744a8155219cb88c0a7e8f6c5a38ca4801d2d0"}
Feb 02 00:24:49 crc kubenswrapper[5108]: I0202 00:24:49.546030 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h8vl8" podStartSLOduration=8.414851688 podStartE2EDuration="11.546012881s" podCreationTimestamp="2026-02-02 00:24:38 +0000 UTC" firstStartedPulling="2026-02-02 00:24:44.450935046 +0000 UTC m=+883.726431976" lastFinishedPulling="2026-02-02 00:24:47.582096229 +0000 UTC m=+886.857593169" observedRunningTime="2026-02-02 00:24:49.540059358 +0000 UTC m=+888.815556308" watchObservedRunningTime="2026-02-02 00:24:49.546012881 +0000 UTC m=+888.821509811"
Feb 02 00:24:50 crc kubenswrapper[5108]: I0202 00:24:50.534064 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerStarted","Data":"0b478e30864ece92a76348a619a09232cd0dc6be617f1ff16f5fbab47f0733d4"}
Feb 02 00:24:50 crc kubenswrapper[5108]: I0202 00:24:50.918914 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 00:24:50 crc kubenswrapper[5108]: I0202 00:24:50.919598 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.544541 5108 generic.go:358] "Generic (PLEG): container finished" podID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerID="3b84a204056dd493507c5261ca60e0264a1a9ff8476ab36754509baeb69d95fb" exitCode=0
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.544640 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerDied","Data":"3b84a204056dd493507c5261ca60e0264a1a9ff8476ab36754509baeb69d95fb"}
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.807763 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"]
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.820639 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"]
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.820779 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.823737 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\""
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.823813 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\""
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.824299 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-vnkgz\""
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.835484 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\""
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.932145 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.932326 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj2jz\" (UniqueName: \"kubernetes.io/projected/effd2c87-a358-47ac-869d-e9b26a40cb11-kube-api-access-gj2jz\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.932378 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/effd2c87-a358-47ac-869d-e9b26a40cb11-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.932569 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:51 crc kubenswrapper[5108]: I0202 00:24:51.932601 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/effd2c87-a358-47ac-869d-e9b26a40cb11-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.034428 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.034509 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gj2jz\" (UniqueName: \"kubernetes.io/projected/effd2c87-a358-47ac-869d-e9b26a40cb11-kube-api-access-gj2jz\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.034533 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/effd2c87-a358-47ac-869d-e9b26a40cb11-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.034590 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.034615 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/effd2c87-a358-47ac-869d-e9b26a40cb11-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.035775 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/effd2c87-a358-47ac-869d-e9b26a40cb11-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"
Feb 02 00:24:52 crc kubenswrapper[5108]: E0202 00:24:52.036604 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Feb 02 00:24:52 crc kubenswrapper[5108]: E0202 00:24:52.036714 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls podName:effd2c87-a358-47ac-869d-e9b26a40cb11 nodeName:}" failed. No retries permitted until 2026-02-02 00:24:52.536688327 +0000 UTC m=+891.812185317 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-2fppp" (UID: "effd2c87-a358-47ac-869d-e9b26a40cb11") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.037787 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/effd2c87-a358-47ac-869d-e9b26a40cb11-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.041898 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.055636 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj2jz\" (UniqueName: \"kubernetes.io/projected/effd2c87-a358-47ac-869d-e9b26a40cb11-kube-api-access-gj2jz\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.541015 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:52 crc kubenswrapper[5108]: E0202 00:24:52.541243 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 02 00:24:52 crc kubenswrapper[5108]: E0202 00:24:52.541347 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls podName:effd2c87-a358-47ac-869d-e9b26a40cb11 nodeName:}" failed. No retries permitted until 2026-02-02 00:24:53.541319906 +0000 UTC m=+892.816816866 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-2fppp" (UID: "effd2c87-a358-47ac-869d-e9b26a40cb11") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.555436 5108 generic.go:358] "Generic (PLEG): container finished" podID="6d411794-541c-4416-bd08-cd4f26bc73cb" containerID="894dc7cb45e63e8f24935dbff4b899be81fc89008187602b3aa77cf89c213a58" exitCode=0 Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.555535 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerDied","Data":"894dc7cb45e63e8f24935dbff4b899be81fc89008187602b3aa77cf89c213a58"} Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.561262 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerStarted","Data":"b62dad9325f662f5d1c0f96bbd9b470ceb240033582f27bd1a3313244689f499"} Feb 02 00:24:52 crc kubenswrapper[5108]: I0202 00:24:52.607652 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4xj84" podStartSLOduration=4.6414505120000005 podStartE2EDuration="5.607630861s" podCreationTimestamp="2026-02-02 00:24:47 +0000 UTC" firstStartedPulling="2026-02-02 00:24:49.507508067 +0000 UTC m=+888.783004997" lastFinishedPulling="2026-02-02 00:24:50.473688376 +0000 UTC m=+889.749185346" observedRunningTime="2026-02-02 00:24:52.602947423 +0000 UTC m=+891.878444353" watchObservedRunningTime="2026-02-02 00:24:52.607630861 +0000 UTC m=+891.883127791" Feb 02 00:24:53 crc kubenswrapper[5108]: I0202 00:24:53.555944 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:53 crc kubenswrapper[5108]: I0202 00:24:53.568533 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/effd2c87-a358-47ac-869d-e9b26a40cb11-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-2fppp\" (UID: \"effd2c87-a358-47ac-869d-e9b26a40cb11\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:53 crc kubenswrapper[5108]: I0202 00:24:53.637801 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.651316 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k"] Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.700408 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k"] Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.700653 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.704698 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.711725 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.896390 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9fccb2ea-b40e-4375-81bf-1bedc36fd526-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.896504 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9fccb2ea-b40e-4375-81bf-1bedc36fd526-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.896535 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfmw\" (UniqueName: \"kubernetes.io/projected/9fccb2ea-b40e-4375-81bf-1bedc36fd526-kube-api-access-tnfmw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.896565 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.896593 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc 
kubenswrapper[5108]: I0202 00:24:54.998051 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9fccb2ea-b40e-4375-81bf-1bedc36fd526-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.998097 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfmw\" (UniqueName: \"kubernetes.io/projected/9fccb2ea-b40e-4375-81bf-1bedc36fd526-kube-api-access-tnfmw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.998125 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.998146 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.998216 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9fccb2ea-b40e-4375-81bf-1bedc36fd526-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.999126 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9fccb2ea-b40e-4375-81bf-1bedc36fd526-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:54 crc kubenswrapper[5108]: I0202 00:24:54.999431 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9fccb2ea-b40e-4375-81bf-1bedc36fd526-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:55 crc kubenswrapper[5108]: E0202 00:24:55.000429 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 02 00:24:55 crc kubenswrapper[5108]: E0202 00:24:55.000505 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls podName:9fccb2ea-b40e-4375-81bf-1bedc36fd526 nodeName:}" failed. No retries permitted until 2026-02-02 00:24:55.50048995 +0000 UTC m=+894.775986890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" (UID: "9fccb2ea-b40e-4375-81bf-1bedc36fd526") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 02 00:24:55 crc kubenswrapper[5108]: I0202 00:24:55.005472 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:55 crc kubenswrapper[5108]: I0202 00:24:55.031924 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnfmw\" (UniqueName: \"kubernetes.io/projected/9fccb2ea-b40e-4375-81bf-1bedc36fd526-kube-api-access-tnfmw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:55 crc kubenswrapper[5108]: I0202 00:24:55.506324 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:55 crc kubenswrapper[5108]: E0202 00:24:55.506678 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 02 00:24:55 crc kubenswrapper[5108]: E0202 00:24:55.506842 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls podName:9fccb2ea-b40e-4375-81bf-1bedc36fd526 nodeName:}" failed. No retries permitted until 2026-02-02 00:24:56.506810636 +0000 UTC m=+895.782307566 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" (UID: "9fccb2ea-b40e-4375-81bf-1bedc36fd526") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 02 00:24:56 crc kubenswrapper[5108]: I0202 00:24:56.524688 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:56 crc kubenswrapper[5108]: I0202 00:24:56.532321 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9fccb2ea-b40e-4375-81bf-1bedc36fd526-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k\" (UID: \"9fccb2ea-b40e-4375-81bf-1bedc36fd526\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:56 crc kubenswrapper[5108]: I0202 00:24:56.829551 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.066439 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.067773 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.136936 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.582888 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp"] Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.609462 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k"] Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.671864 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:24:58 crc kubenswrapper[5108]: I0202 00:24:58.729784 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xj84"] Feb 02 00:24:58 crc kubenswrapper[5108]: W0202 00:24:58.948046 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeffd2c87_a358_47ac_869d_e9b26a40cb11.slice/crio-bd66e511c09b936b12a46c73d0fdbc272762b17802192333046059f1bbf07a82 WatchSource:0}: Error finding container bd66e511c09b936b12a46c73d0fdbc272762b17802192333046059f1bbf07a82: Status 404 returned error can't find the container with id bd66e511c09b936b12a46c73d0fdbc272762b17802192333046059f1bbf07a82 Feb 02 00:24:59 crc kubenswrapper[5108]: I0202 00:24:59.033386 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:59 crc kubenswrapper[5108]: I0202 00:24:59.033480 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:59 crc kubenswrapper[5108]: I0202 00:24:59.078387 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:24:59 crc kubenswrapper[5108]: I0202 00:24:59.542321 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4"] Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.922926 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerStarted","Data":"07fd0b579f2088cc2eed006074ff62a37311f5d1c4dcda24d9af854d6be0e53c"} Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.923327 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4"] Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.923405 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerStarted","Data":"bd66e511c09b936b12a46c73d0fdbc272762b17802192333046059f1bbf07a82"} Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.924127 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.928486 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.932601 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.982824 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.989602 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.989751 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.989822 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsf5h\" 
(UniqueName: \"kubernetes.io/projected/095466f0-3dfb-4daf-809c-188de8da2ee9-kube-api-access-wsf5h\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.989843 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/095466f0-3dfb-4daf-809c-188de8da2ee9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:00 crc kubenswrapper[5108]: I0202 00:25:00.989909 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/095466f0-3dfb-4daf-809c-188de8da2ee9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.091502 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.091848 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.092205 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wsf5h\" (UniqueName: \"kubernetes.io/projected/095466f0-3dfb-4daf-809c-188de8da2ee9-kube-api-access-wsf5h\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.092315 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/095466f0-3dfb-4daf-809c-188de8da2ee9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.092424 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/095466f0-3dfb-4daf-809c-188de8da2ee9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc 
kubenswrapper[5108]: I0202 00:25:01.093348 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/095466f0-3dfb-4daf-809c-188de8da2ee9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.093614 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/095466f0-3dfb-4daf-809c-188de8da2ee9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.100378 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.100853 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/095466f0-3dfb-4daf-809c-188de8da2ee9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.110882 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsf5h\" (UniqueName: \"kubernetes.io/projected/095466f0-3dfb-4daf-809c-188de8da2ee9-kube-api-access-wsf5h\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4\" (UID: \"095466f0-3dfb-4daf-809c-188de8da2ee9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.247254 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.645513 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4xj84" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="registry-server" containerID="cri-o://b62dad9325f662f5d1c0f96bbd9b470ceb240033582f27bd1a3313244689f499" gracePeriod=2 Feb 02 00:25:01 crc kubenswrapper[5108]: I0202 00:25:01.794757 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"] Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.243445 5108 scope.go:117] "RemoveContainer" containerID="ff61ff81d7abb5723358d9eb219b89d933545279f212b14a8a7b31b99a0fd8b3" Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.370664 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.371507 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.390936 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.391710 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.653460 5108 generic.go:358] "Generic (PLEG): container finished" podID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerID="b62dad9325f662f5d1c0f96bbd9b470ceb240033582f27bd1a3313244689f499" exitCode=0 Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.653554 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerDied","Data":"b62dad9325f662f5d1c0f96bbd9b470ceb240033582f27bd1a3313244689f499"} Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.654212 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h8vl8" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="registry-server" containerID="cri-o://27b7b7465364708570b1cd87ef744a8155219cb88c0a7e8f6c5a38ca4801d2d0" gracePeriod=2 Feb 02 00:25:02 crc kubenswrapper[5108]: I0202 00:25:02.659107 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4"] Feb 02 00:25:03 crc kubenswrapper[5108]: I0202 00:25:03.663720 5108 generic.go:358] "Generic (PLEG): container finished" podID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerID="27b7b7465364708570b1cd87ef744a8155219cb88c0a7e8f6c5a38ca4801d2d0" exitCode=0 Feb 02 00:25:03 crc kubenswrapper[5108]: I0202 00:25:03.663778 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerDied","Data":"27b7b7465364708570b1cd87ef744a8155219cb88c0a7e8f6c5a38ca4801d2d0"} Feb 02 00:25:03 crc kubenswrapper[5108]: I0202 00:25:03.666334 5108 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerStarted","Data":"796252c50d779923668248d528a631e06b4dc9dac627170e9a8bc66a407054a6"} Feb 02 00:25:03 crc kubenswrapper[5108]: I0202 00:25:03.996525 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.064481 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content\") pod \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.064644 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities\") pod \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.064851 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrt86\" (UniqueName: \"kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86\") pod \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\" (UID: \"b43972ad-8935-44fe-a3cb-4ae69a48b27a\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.066797 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities" (OuterVolumeSpecName: "utilities") pod "b43972ad-8935-44fe-a3cb-4ae69a48b27a" (UID: "b43972ad-8935-44fe-a3cb-4ae69a48b27a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.083043 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86" (OuterVolumeSpecName: "kube-api-access-hrt86") pod "b43972ad-8935-44fe-a3cb-4ae69a48b27a" (UID: "b43972ad-8935-44fe-a3cb-4ae69a48b27a"). InnerVolumeSpecName "kube-api-access-hrt86". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.111065 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b43972ad-8935-44fe-a3cb-4ae69a48b27a" (UID: "b43972ad-8935-44fe-a3cb-4ae69a48b27a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.166511 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrt86\" (UniqueName: \"kubernetes.io/projected/b43972ad-8935-44fe-a3cb-4ae69a48b27a-kube-api-access-hrt86\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.166548 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.166561 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43972ad-8935-44fe-a3cb-4ae69a48b27a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.535949 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675338 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities\") pod \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675428 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content\") pod \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675476 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xj84" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675498 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkqxk\" (UniqueName: \"kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk\") pod \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\" (UID: \"da4c12ea-9e45-4b71-9f9a-565c93d8520f\") " Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675477 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xj84" event={"ID":"da4c12ea-9e45-4b71-9f9a-565c93d8520f","Type":"ContainerDied","Data":"ed59f3102a80ca4b5a1d7c10be89cb344fd7a76759d5c3c7818e734032b6f019"} Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.675964 5108 scope.go:117] "RemoveContainer" containerID="b62dad9325f662f5d1c0f96bbd9b470ceb240033582f27bd1a3313244689f499" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.676185 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities" (OuterVolumeSpecName: "utilities") pod "da4c12ea-9e45-4b71-9f9a-565c93d8520f" (UID: "da4c12ea-9e45-4b71-9f9a-565c93d8520f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.682269 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk" (OuterVolumeSpecName: "kube-api-access-hkqxk") pod "da4c12ea-9e45-4b71-9f9a-565c93d8520f" (UID: "da4c12ea-9e45-4b71-9f9a-565c93d8520f"). InnerVolumeSpecName "kube-api-access-hkqxk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.687658 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8vl8" event={"ID":"b43972ad-8935-44fe-a3cb-4ae69a48b27a","Type":"ContainerDied","Data":"144daf304af88934751889a69e636b73a0f5991ae80aef024d91db8efa15874f"} Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.687798 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8vl8" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.716671 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"] Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.725052 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h8vl8"] Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.735538 5108 scope.go:117] "RemoveContainer" containerID="3b84a204056dd493507c5261ca60e0264a1a9ff8476ab36754509baeb69d95fb" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.736323 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da4c12ea-9e45-4b71-9f9a-565c93d8520f" (UID: "da4c12ea-9e45-4b71-9f9a-565c93d8520f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.776907 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.777144 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c12ea-9e45-4b71-9f9a-565c93d8520f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.777209 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hkqxk\" (UniqueName: \"kubernetes.io/projected/da4c12ea-9e45-4b71-9f9a-565c93d8520f-kube-api-access-hkqxk\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:04 crc kubenswrapper[5108]: I0202 00:25:04.957334 5108 scope.go:117] "RemoveContainer" containerID="aad8617a916aa584794ca1e18d38b92126d401c0258d25de6e56883166b73b19" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.033522 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xj84"] Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.040377 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4xj84"] Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.352044 5108 scope.go:117] "RemoveContainer" containerID="27b7b7465364708570b1cd87ef744a8155219cb88c0a7e8f6c5a38ca4801d2d0" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.436209 5108 scope.go:117] "RemoveContainer" containerID="abc0b81ac60fdf9242e7b8d30cb6c51ec290df312b9b70459e2737a2692347f4" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.457446 5108 scope.go:117] "RemoveContainer" containerID="229b394fd30cf7d76b0f95baebefc43c286c621c34e04a9822ceaf4d47ea4ecb" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.571649 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" path="/var/lib/kubelet/pods/b43972ad-8935-44fe-a3cb-4ae69a48b27a/volumes" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.573176 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" path="/var/lib/kubelet/pods/da4c12ea-9e45-4b71-9f9a-565c93d8520f/volumes" Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.697329 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3180ec82-70eb-4837-9eed-a92e41e5e3fc","Type":"ContainerStarted","Data":"00027b0d1ecfc071bcee298c391117d525ebc12f1a7d258d2046be39d16f353a"} Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.701031 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerStarted","Data":"60261bddc1358cb6371c6231f83867738c6f2a1c889df2042ce82b466ef763c2"} Feb 02 00:25:05 crc kubenswrapper[5108]: I0202 00:25:05.730850 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=6.624462238 podStartE2EDuration="42.730833841s" podCreationTimestamp="2026-02-02 00:24:23 +0000 UTC" firstStartedPulling="2026-02-02 00:24:26.750845963 +0000 UTC m=+866.026342893" lastFinishedPulling="2026-02-02 00:25:02.857217556 +0000 UTC m=+902.132714496" 
observedRunningTime="2026-02-02 00:25:05.730358238 +0000 UTC m=+905.005855188" watchObservedRunningTime="2026-02-02 00:25:05.730833841 +0000 UTC m=+905.006330761" Feb 02 00:25:06 crc kubenswrapper[5108]: I0202 00:25:06.445671 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Feb 02 00:25:06 crc kubenswrapper[5108]: I0202 00:25:06.716901 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerStarted","Data":"5c839b40765bcc1d7216fe8932863226774aa07227d24c3ecd883e030671bac5"} Feb 02 00:25:06 crc kubenswrapper[5108]: I0202 00:25:06.720447 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerStarted","Data":"73ff2fa5277767b23d1c00f8c9dcfb2ff38f4efd2e94c1f9000405b6bef8ab78"} Feb 02 00:25:06 crc kubenswrapper[5108]: I0202 00:25:06.726239 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerStarted","Data":"7e047dd562c1a4096c3937885d7b4893c158027ce5820089513e15e2bd1936d7"} Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.580674 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2"] Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581330 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="extract-content" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581344 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="extract-content" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581365 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="extract-content" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581371 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="extract-content" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581379 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581386 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581403 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581409 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581420 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="extract-utilities" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581425 5108 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="extract-utilities" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581438 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="extract-utilities" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581443 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="extract-utilities" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581548 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b43972ad-8935-44fe-a3cb-4ae69a48b27a" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.581560 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="da4c12ea-9e45-4b71-9f9a-565c93d8520f" containerName="registry-server" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.587351 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.590309 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2"] Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.590948 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.591575 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.724404 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/69974414-b4a3-48b4-ad93-b7b855ee08ea-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.724473 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scx9\" (UniqueName: \"kubernetes.io/projected/69974414-b4a3-48b4-ad93-b7b855ee08ea-kube-api-access-7scx9\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.724513 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/69974414-b4a3-48b4-ad93-b7b855ee08ea-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.724556 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/69974414-b4a3-48b4-ad93-b7b855ee08ea-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.826514 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/69974414-b4a3-48b4-ad93-b7b855ee08ea-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.826550 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7scx9\" (UniqueName: \"kubernetes.io/projected/69974414-b4a3-48b4-ad93-b7b855ee08ea-kube-api-access-7scx9\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.826857 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/69974414-b4a3-48b4-ad93-b7b855ee08ea-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.826886 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/69974414-b4a3-48b4-ad93-b7b855ee08ea-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.829359 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/69974414-b4a3-48b4-ad93-b7b855ee08ea-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.832218 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/69974414-b4a3-48b4-ad93-b7b855ee08ea-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.838168 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.847769 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/69974414-b4a3-48b4-ad93-b7b855ee08ea-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.881948 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7scx9\" (UniqueName: 
\"kubernetes.io/projected/69974414-b4a3-48b4-ad93-b7b855ee08ea-kube-api-access-7scx9\") pod \"default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2\" (UID: \"69974414-b4a3-48b4-ad93-b7b855ee08ea\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:07 crc kubenswrapper[5108]: I0202 00:25:07.915360 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.088852 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv"] Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.099656 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv"] Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.099801 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.106613 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.234429 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7a85d430-d592-4eee-99f4-89aea943a820-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.234525 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49fcb\" (UniqueName: \"kubernetes.io/projected/7a85d430-d592-4eee-99f4-89aea943a820-kube-api-access-49fcb\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.234685 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7a85d430-d592-4eee-99f4-89aea943a820-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.235155 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a85d430-d592-4eee-99f4-89aea943a820-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.337755 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-49fcb\" (UniqueName: \"kubernetes.io/projected/7a85d430-d592-4eee-99f4-89aea943a820-kube-api-access-49fcb\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: 
\"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.337837 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7a85d430-d592-4eee-99f4-89aea943a820-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.337952 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a85d430-d592-4eee-99f4-89aea943a820-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.338024 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7a85d430-d592-4eee-99f4-89aea943a820-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.339099 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a85d430-d592-4eee-99f4-89aea943a820-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.339920 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7a85d430-d592-4eee-99f4-89aea943a820-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.345592 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7a85d430-d592-4eee-99f4-89aea943a820-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.358005 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-49fcb\" (UniqueName: \"kubernetes.io/projected/7a85d430-d592-4eee-99f4-89aea943a820-kube-api-access-49fcb\") pod \"default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv\" (UID: \"7a85d430-d592-4eee-99f4-89aea943a820\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.408017 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2"] Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.432851 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.753986 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerStarted","Data":"a3fb1e51380c85f9d6cc72dd9e531b5eaed4c864380caf334bfa66c037ce1bd8"} Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.754285 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6d411794-541c-4416-bd08-cd4f26bc73cb","Type":"ContainerStarted","Data":"f34fa2ecfdc2b2904d8b3e00ca6f8ce1670030587199248f717fb7f8dc0539a5"} Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.757579 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" event={"ID":"69974414-b4a3-48b4-ad93-b7b855ee08ea","Type":"ContainerStarted","Data":"9dfcdbec11103be6db6c2157a6425885febd77f0bb5b9849868fd748bf1f38b0"} Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.912509 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=16.037770339 podStartE2EDuration="31.91248217s" podCreationTimestamp="2026-02-02 00:24:37 +0000 UTC" firstStartedPulling="2026-02-02 00:24:52.556701627 +0000 UTC m=+891.832198557" lastFinishedPulling="2026-02-02 00:25:08.431413458 +0000 UTC m=+907.706910388" observedRunningTime="2026-02-02 00:25:08.775298888 +0000 UTC m=+908.050795818" watchObservedRunningTime="2026-02-02 00:25:08.91248217 +0000 UTC m=+908.187979100" Feb 02 00:25:08 crc kubenswrapper[5108]: I0202 00:25:08.920414 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv"] Feb 02 00:25:11 crc kubenswrapper[5108]: I0202 00:25:11.445394 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Feb 02 00:25:11 crc kubenswrapper[5108]: I0202 00:25:11.494774 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Feb 02 00:25:11 crc kubenswrapper[5108]: I0202 00:25:11.816770 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Feb 02 00:25:12 crc kubenswrapper[5108]: W0202 00:25:12.365146 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a85d430_d592_4eee_99f4_89aea943a820.slice/crio-9e7251ee47fb6e1ef3288d518188b085cc8dd420eaf16a8231d81e3f6ac81c89 WatchSource:0}: Error finding container 9e7251ee47fb6e1ef3288d518188b085cc8dd420eaf16a8231d81e3f6ac81c89: Status 404 returned error can't find the container with id 9e7251ee47fb6e1ef3288d518188b085cc8dd420eaf16a8231d81e3f6ac81c89 Feb 02 00:25:12 crc kubenswrapper[5108]: I0202 00:25:12.804039 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" event={"ID":"7a85d430-d592-4eee-99f4-89aea943a820","Type":"ContainerStarted","Data":"9e7251ee47fb6e1ef3288d518188b085cc8dd420eaf16a8231d81e3f6ac81c89"} Feb 02 00:25:13 crc kubenswrapper[5108]: I0202 00:25:13.812561 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" 
event={"ID":"7a85d430-d592-4eee-99f4-89aea943a820","Type":"ContainerStarted","Data":"5b966889339391ad5d1c58ffdd96cca6c66b2241f74216278cd1c8d7a429186f"} Feb 02 00:25:13 crc kubenswrapper[5108]: I0202 00:25:13.818628 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerStarted","Data":"50f66ae7b1198518f36c4f7c0b2ac204ea13d743efd2a28463532e9ab85cdc6b"} Feb 02 00:25:13 crc kubenswrapper[5108]: I0202 00:25:13.824818 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerStarted","Data":"e4ed96cabaa8a92966a36fb9578a8f60e5e271a49ba4cf3ce82a49924816b94d"} Feb 02 00:25:13 crc kubenswrapper[5108]: I0202 00:25:13.828496 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerStarted","Data":"796732d5918e79323e00815611ebf68a7c6940165d8970726370476dcd69dadd"} Feb 02 00:25:13 crc kubenswrapper[5108]: I0202 00:25:13.832601 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" event={"ID":"69974414-b4a3-48b4-ad93-b7b855ee08ea","Type":"ContainerStarted","Data":"219d6aaa4b6711ff4073da8170cdf099f0a8e4eb465af71ad32107c2ea1fb1b7"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.874501 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" event={"ID":"7a85d430-d592-4eee-99f4-89aea943a820","Type":"ContainerStarted","Data":"6edea8ab5eaf6fb252b78d4ed128752b4746309e765259c0adb7ff2ebd8440b6"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.877521 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerStarted","Data":"186c748fd29aa9602cfdbcbc177ff1f08033051353b5259a7d1c614462eec6d1"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.880834 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerStarted","Data":"2e70b2446cf03e6a5ee77e0cf0a4dc86cdd0a17b3fa16cac2a8fa9c257064b12"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.883602 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerStarted","Data":"f5407960d70a5c2c1c3c605f67915b96b9641538a414d2d0428317578aa15cb4"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.885816 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" event={"ID":"69974414-b4a3-48b4-ad93-b7b855ee08ea","Type":"ContainerStarted","Data":"252a1e5bac19051aa1231541315e513b166fbd6bc61dfd2554faea416e055edb"} Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.904169 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" podStartSLOduration=5.204066132 
podStartE2EDuration="10.904011374s" podCreationTimestamp="2026-02-02 00:25:08 +0000 UTC" firstStartedPulling="2026-02-02 00:25:12.367966039 +0000 UTC m=+911.643462989" lastFinishedPulling="2026-02-02 00:25:18.067911301 +0000 UTC m=+917.343408231" observedRunningTime="2026-02-02 00:25:18.897044896 +0000 UTC m=+918.172541886" watchObservedRunningTime="2026-02-02 00:25:18.904011374 +0000 UTC m=+918.179508344" Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.947123 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" podStartSLOduration=2.220597061 podStartE2EDuration="11.94710287s" podCreationTimestamp="2026-02-02 00:25:07 +0000 UTC" firstStartedPulling="2026-02-02 00:25:08.425877269 +0000 UTC m=+907.701374199" lastFinishedPulling="2026-02-02 00:25:18.152383068 +0000 UTC m=+917.427880008" observedRunningTime="2026-02-02 00:25:18.932342884 +0000 UTC m=+918.207839884" watchObservedRunningTime="2026-02-02 00:25:18.94710287 +0000 UTC m=+918.222599800" Feb 02 00:25:18 crc kubenswrapper[5108]: I0202 00:25:18.951287 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" podStartSLOduration=8.806619759 podStartE2EDuration="27.951269951s" podCreationTimestamp="2026-02-02 00:24:51 +0000 UTC" firstStartedPulling="2026-02-02 00:24:58.950285175 +0000 UTC m=+898.225782105" lastFinishedPulling="2026-02-02 00:25:18.094935367 +0000 UTC m=+917.370432297" observedRunningTime="2026-02-02 00:25:18.948116317 +0000 UTC m=+918.223613247" watchObservedRunningTime="2026-02-02 00:25:18.951269951 +0000 UTC m=+918.226766891" Feb 02 00:25:19 crc kubenswrapper[5108]: I0202 00:25:19.005777 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" podStartSLOduration=4.62591278 podStartE2EDuration="20.005760375s" podCreationTimestamp="2026-02-02 00:24:59 +0000 UTC" firstStartedPulling="2026-02-02 00:25:02.667084184 +0000 UTC m=+901.942581114" lastFinishedPulling="2026-02-02 00:25:18.046931779 +0000 UTC m=+917.322428709" observedRunningTime="2026-02-02 00:25:18.971744932 +0000 UTC m=+918.247241932" watchObservedRunningTime="2026-02-02 00:25:19.005760375 +0000 UTC m=+918.281257305" Feb 02 00:25:19 crc kubenswrapper[5108]: I0202 00:25:19.010874 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" podStartSLOduration=5.844297165 podStartE2EDuration="25.010865282s" podCreationTimestamp="2026-02-02 00:24:54 +0000 UTC" firstStartedPulling="2026-02-02 00:24:58.955312703 +0000 UTC m=+898.230809633" lastFinishedPulling="2026-02-02 00:25:18.12188082 +0000 UTC m=+917.397377750" observedRunningTime="2026-02-02 00:25:19.003807292 +0000 UTC m=+918.279304222" watchObservedRunningTime="2026-02-02 00:25:19.010865282 +0000 UTC m=+918.286362212" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.332891 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.333174 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" podUID="22703395-ebd0-469b-aec4-b703ed4a8e65" containerName="default-interconnect" 
containerID="cri-o://6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1" gracePeriod=30 Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.724327 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.752129 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-7pdq9"] Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.752828 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22703395-ebd0-469b-aec4-b703ed4a8e65" containerName="default-interconnect" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.752847 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="22703395-ebd0-469b-aec4-b703ed4a8e65" containerName="default-interconnect" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.753006 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="22703395-ebd0-469b-aec4-b703ed4a8e65" containerName="default-interconnect" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.757947 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.772331 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-7pdq9"] Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.835845 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.835939 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836038 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836087 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836122 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836158 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-b88kl\" (UniqueName: \"kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836183 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config\") pod \"22703395-ebd0-469b-aec4-b703ed4a8e65\" (UID: \"22703395-ebd0-469b-aec4-b703ed4a8e65\") " Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836337 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836377 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836399 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836443 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-config\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836561 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836582 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmm5\" (UniqueName: \"kubernetes.io/projected/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-kube-api-access-9tmm5\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.836597 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-users\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.837363 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.842068 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl" (OuterVolumeSpecName: "kube-api-access-b88kl") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "kube-api-access-b88kl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.844395 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.844466 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.845007 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.845089 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.845656 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "22703395-ebd0-469b-aec4-b703ed4a8e65" (UID: "22703395-ebd0-469b-aec4-b703ed4a8e65"). 
InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.901433 5108 generic.go:358] "Generic (PLEG): container finished" podID="22703395-ebd0-469b-aec4-b703ed4a8e65" containerID="6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1" exitCode=0 Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.901485 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.901526 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" event={"ID":"22703395-ebd0-469b-aec4-b703ed4a8e65","Type":"ContainerDied","Data":"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1"} Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.901578 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xsgkr" event={"ID":"22703395-ebd0-469b-aec4-b703ed4a8e65","Type":"ContainerDied","Data":"be460dd189cbfc5a2a37f3ba1e3bf4c61862c2876dd659904fe0292f2bbf5517"} Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.901600 5108 scope.go:117] "RemoveContainer" containerID="6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.922348 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.922421 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.922469 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.923001 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.923052 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" containerID="cri-o://795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d" gracePeriod=600 Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.926573 5108 scope.go:117] "RemoveContainer" containerID="6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1" Feb 02 00:25:20 crc kubenswrapper[5108]: E0202 00:25:20.928876 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1\": container with ID starting with 6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1 not found: ID does not exist" containerID="6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.928921 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1"} err="failed to get container status \"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1\": rpc error: code = NotFound desc = could not find container \"6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1\": container with ID starting with 6ae4b75dc865dfdeee25019f1c5ea8673d91711bbd96aa4e1555060e8f2af4e1 not found: ID does not exist" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.937747 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938479 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938595 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938616 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938655 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-config\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938691 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938711 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmm5\" (UniqueName: 
\"kubernetes.io/projected/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-kube-api-access-9tmm5\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938732 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-users\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938783 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938794 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938803 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b88kl\" (UniqueName: \"kubernetes.io/projected/22703395-ebd0-469b-aec4-b703ed4a8e65-kube-api-access-b88kl\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938812 5108 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938821 5108 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938830 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.938839 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/22703395-ebd0-469b-aec4-b703ed4a8e65-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.942216 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xsgkr"] Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.944186 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-users\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.944361 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-sasl-config\") pod 
\"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.946142 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.947854 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.948188 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.948271 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:20 crc kubenswrapper[5108]: I0202 00:25:20.966522 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmm5\" (UniqueName: \"kubernetes.io/projected/b6c4ad43-6e88-4492-ac18-0889f4f1fcdd-kube-api-access-9tmm5\") pod \"default-interconnect-55bf8d5cb-7pdq9\" (UID: \"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd\") " pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.075749 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.569414 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22703395-ebd0-469b-aec4-b703ed4a8e65" path="/var/lib/kubelet/pods/22703395-ebd0-469b-aec4-b703ed4a8e65/volumes" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.570696 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-7pdq9"] Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.914028 5108 generic.go:358] "Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.914141 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.914403 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.914430 5108 scope.go:117] "RemoveContainer" containerID="2f2e9df533cb87396f8d3fd0d1a26fadb3bf2cae351b8b03ee4f3bd210e16a31" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.919069 5108 generic.go:358] "Generic (PLEG): container finished" podID="7a85d430-d592-4eee-99f4-89aea943a820" containerID="5b966889339391ad5d1c58ffdd96cca6c66b2241f74216278cd1c8d7a429186f" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.919160 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" event={"ID":"7a85d430-d592-4eee-99f4-89aea943a820","Type":"ContainerDied","Data":"5b966889339391ad5d1c58ffdd96cca6c66b2241f74216278cd1c8d7a429186f"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.919944 5108 scope.go:117] "RemoveContainer" containerID="5b966889339391ad5d1c58ffdd96cca6c66b2241f74216278cd1c8d7a429186f" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.923714 5108 generic.go:358] "Generic (PLEG): container finished" podID="9fccb2ea-b40e-4375-81bf-1bedc36fd526" containerID="50f66ae7b1198518f36c4f7c0b2ac204ea13d743efd2a28463532e9ab85cdc6b" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.923806 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerDied","Data":"50f66ae7b1198518f36c4f7c0b2ac204ea13d743efd2a28463532e9ab85cdc6b"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.924305 5108 scope.go:117] "RemoveContainer" containerID="50f66ae7b1198518f36c4f7c0b2ac204ea13d743efd2a28463532e9ab85cdc6b" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.938265 5108 generic.go:358] "Generic (PLEG): container finished" podID="effd2c87-a358-47ac-869d-e9b26a40cb11" containerID="e4ed96cabaa8a92966a36fb9578a8f60e5e271a49ba4cf3ce82a49924816b94d" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.938344 5108 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerDied","Data":"e4ed96cabaa8a92966a36fb9578a8f60e5e271a49ba4cf3ce82a49924816b94d"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.938929 5108 scope.go:117] "RemoveContainer" containerID="e4ed96cabaa8a92966a36fb9578a8f60e5e271a49ba4cf3ce82a49924816b94d" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.944658 5108 generic.go:358] "Generic (PLEG): container finished" podID="095466f0-3dfb-4daf-809c-188de8da2ee9" containerID="796732d5918e79323e00815611ebf68a7c6940165d8970726370476dcd69dadd" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.944813 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerDied","Data":"796732d5918e79323e00815611ebf68a7c6940165d8970726370476dcd69dadd"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.945662 5108 scope.go:117] "RemoveContainer" containerID="796732d5918e79323e00815611ebf68a7c6940165d8970726370476dcd69dadd" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.955062 5108 generic.go:358] "Generic (PLEG): container finished" podID="69974414-b4a3-48b4-ad93-b7b855ee08ea" containerID="219d6aaa4b6711ff4073da8170cdf099f0a8e4eb465af71ad32107c2ea1fb1b7" exitCode=0 Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.955186 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" event={"ID":"69974414-b4a3-48b4-ad93-b7b855ee08ea","Type":"ContainerDied","Data":"219d6aaa4b6711ff4073da8170cdf099f0a8e4eb465af71ad32107c2ea1fb1b7"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.956913 5108 scope.go:117] "RemoveContainer" containerID="219d6aaa4b6711ff4073da8170cdf099f0a8e4eb465af71ad32107c2ea1fb1b7" Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.973555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" event={"ID":"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd","Type":"ContainerStarted","Data":"2fbf8a71649f3a98c7d31891c2d8a95b4ce92e749aed43e550a9af1c05e5939b"} Feb 02 00:25:21 crc kubenswrapper[5108]: I0202 00:25:21.973727 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" event={"ID":"b6c4ad43-6e88-4492-ac18-0889f4f1fcdd","Type":"ContainerStarted","Data":"569e8f5708b393ee6a89cd7a77a57b8b62f08e7fb9b85bfe4aeb1881d6f9de98"} Feb 02 00:25:22 crc kubenswrapper[5108]: I0202 00:25:22.095531 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-7pdq9" podStartSLOduration=2.095503367 podStartE2EDuration="2.095503367s" podCreationTimestamp="2026-02-02 00:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:25:22.063189869 +0000 UTC m=+921.338686799" watchObservedRunningTime="2026-02-02 00:25:22.095503367 +0000 UTC m=+921.371000297" Feb 02 00:25:22 crc kubenswrapper[5108]: I0202 00:25:22.984765 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv" 
event={"ID":"7a85d430-d592-4eee-99f4-89aea943a820","Type":"ContainerStarted","Data":"88dae75f23c970f099452f64517336194a281bee143e83c5871bc2c78ce44fd9"} Feb 02 00:25:22 crc kubenswrapper[5108]: I0202 00:25:22.993640 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k" event={"ID":"9fccb2ea-b40e-4375-81bf-1bedc36fd526","Type":"ContainerStarted","Data":"0a19fd50e5b7e00a237f48e62454db3189305877c11a32a4ee888dcbc479a9d0"} Feb 02 00:25:22 crc kubenswrapper[5108]: I0202 00:25:22.997666 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-2fppp" event={"ID":"effd2c87-a358-47ac-869d-e9b26a40cb11","Type":"ContainerStarted","Data":"144835ad071ad274c64988b578090a7870a57722a90c5a304eb6499ffa673778"} Feb 02 00:25:23 crc kubenswrapper[5108]: I0202 00:25:23.002042 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4" event={"ID":"095466f0-3dfb-4daf-809c-188de8da2ee9","Type":"ContainerStarted","Data":"b132791c697d0ba43a50dc4d3ea5279d0863a395830c3224c7af935ff6799a4f"} Feb 02 00:25:23 crc kubenswrapper[5108]: I0202 00:25:23.005313 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2" event={"ID":"69974414-b4a3-48b4-ad93-b7b855ee08ea","Type":"ContainerStarted","Data":"3474e2473ec982f6042242fdb3e83622e3c71c770c4f847446b5b9779b2f737f"} Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.422989 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.555970 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.556172 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.559350 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.559607 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.582887 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/318a1230-b836-4db1-b9b7-8da7017365ad-qdr-test-config\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.583218 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgc7g\" (UniqueName: \"kubernetes.io/projected/318a1230-b836-4db1-b9b7-8da7017365ad-kube-api-access-jgc7g\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.583384 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/318a1230-b836-4db1-b9b7-8da7017365ad-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.685083 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/318a1230-b836-4db1-b9b7-8da7017365ad-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.685151 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/318a1230-b836-4db1-b9b7-8da7017365ad-qdr-test-config\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.685168 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jgc7g\" (UniqueName: \"kubernetes.io/projected/318a1230-b836-4db1-b9b7-8da7017365ad-kube-api-access-jgc7g\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.686064 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/318a1230-b836-4db1-b9b7-8da7017365ad-qdr-test-config\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.692969 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/318a1230-b836-4db1-b9b7-8da7017365ad-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 
00:25:29.702756 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgc7g\" (UniqueName: \"kubernetes.io/projected/318a1230-b836-4db1-b9b7-8da7017365ad-kube-api-access-jgc7g\") pod \"qdr-test\" (UID: \"318a1230-b836-4db1-b9b7-8da7017365ad\") " pod="service-telemetry/qdr-test" Feb 02 00:25:29 crc kubenswrapper[5108]: I0202 00:25:29.880384 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 02 00:25:30 crc kubenswrapper[5108]: I0202 00:25:30.341584 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 02 00:25:31 crc kubenswrapper[5108]: I0202 00:25:31.075487 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"318a1230-b836-4db1-b9b7-8da7017365ad","Type":"ContainerStarted","Data":"10bf22125dfa147cc049cdd45a10169258bae1cc8e6679ce5417fc58f2de2a9d"} Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.143175 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"318a1230-b836-4db1-b9b7-8da7017365ad","Type":"ContainerStarted","Data":"eb5424d4b0be8225d9294450e8fd43a85057550b1eb85fa6e49dc4cd5a7fde77"} Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.162605 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.00773301 podStartE2EDuration="10.162588184s" podCreationTimestamp="2026-02-02 00:25:29 +0000 UTC" firstStartedPulling="2026-02-02 00:25:30.348315861 +0000 UTC m=+929.623812791" lastFinishedPulling="2026-02-02 00:25:38.503171045 +0000 UTC m=+937.778667965" observedRunningTime="2026-02-02 00:25:39.157735354 +0000 UTC m=+938.433232314" watchObservedRunningTime="2026-02-02 00:25:39.162588184 +0000 UTC m=+938.438085114" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.497872 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8jkkf"] Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.502982 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.504923 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.505262 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8jkkf"] Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.505437 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.505449 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.506537 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.506585 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.508800 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.648963 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649008 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649037 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649147 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649320 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: 
\"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649396 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.649483 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7cg\" (UniqueName: \"kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.750761 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.750824 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.750869 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7cg\" (UniqueName: \"kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.750942 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.751150 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.751257 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.751345 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" 
(UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752219 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752274 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752360 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752559 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752874 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.752948 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.775356 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7cg\" (UniqueName: \"kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg\") pod \"stf-smoketest-smoke1-8jkkf\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.821908 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.935343 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.951048 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 02 00:25:39 crc kubenswrapper[5108]: I0202 00:25:39.969636 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.057221 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pv8\" (UniqueName: \"kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8\") pod \"curl\" (UID: \"e6916909-03ba-493b-9e93-11005e24910d\") " pod="service-telemetry/curl" Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.158397 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pv8\" (UniqueName: \"kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8\") pod \"curl\" (UID: \"e6916909-03ba-493b-9e93-11005e24910d\") " pod="service-telemetry/curl" Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.177769 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pv8\" (UniqueName: \"kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8\") pod \"curl\" (UID: \"e6916909-03ba-493b-9e93-11005e24910d\") " pod="service-telemetry/curl" Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.267012 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8jkkf"] Feb 02 00:25:40 crc kubenswrapper[5108]: W0202 00:25:40.278590 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod528509a5_e39b_4132_a319_38a57ed61f15.slice/crio-e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6 WatchSource:0}: Error finding container e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6: Status 404 returned error can't find the container with id e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6 Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.279793 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 02 00:25:40 crc kubenswrapper[5108]: I0202 00:25:40.528371 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 02 00:25:40 crc kubenswrapper[5108]: W0202 00:25:40.534691 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6916909_03ba_493b_9e93_11005e24910d.slice/crio-54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981 WatchSource:0}: Error finding container 54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981: Status 404 returned error can't find the container with id 54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981 Feb 02 00:25:41 crc kubenswrapper[5108]: I0202 00:25:41.157327 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e6916909-03ba-493b-9e93-11005e24910d","Type":"ContainerStarted","Data":"54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981"} Feb 02 00:25:41 crc kubenswrapper[5108]: I0202 00:25:41.158816 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerStarted","Data":"e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6"} Feb 02 00:25:42 crc kubenswrapper[5108]: I0202 00:25:42.168906 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e6916909-03ba-493b-9e93-11005e24910d","Type":"ContainerStarted","Data":"1300001f353f4683812d97f9e858e85354c55cd2a9c3211149a64decf02392f1"} Feb 02 00:25:42 crc kubenswrapper[5108]: I0202 00:25:42.185374 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/curl" podStartSLOduration=1.7912723910000001 podStartE2EDuration="3.185352739s" podCreationTimestamp="2026-02-02 00:25:39 +0000 UTC" firstStartedPulling="2026-02-02 00:25:40.537058317 +0000 UTC m=+939.812555247" lastFinishedPulling="2026-02-02 00:25:41.931138665 +0000 UTC m=+941.206635595" observedRunningTime="2026-02-02 00:25:42.179181163 +0000 UTC m=+941.454678093" watchObservedRunningTime="2026-02-02 00:25:42.185352739 +0000 UTC m=+941.460849669" Feb 02 00:25:43 crc kubenswrapper[5108]: I0202 00:25:43.178251 5108 generic.go:358] "Generic (PLEG): container finished" podID="e6916909-03ba-493b-9e93-11005e24910d" containerID="1300001f353f4683812d97f9e858e85354c55cd2a9c3211149a64decf02392f1" exitCode=0 Feb 02 00:25:43 crc kubenswrapper[5108]: I0202 00:25:43.178461 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e6916909-03ba-493b-9e93-11005e24910d","Type":"ContainerDied","Data":"1300001f353f4683812d97f9e858e85354c55cd2a9c3211149a64decf02392f1"} Feb 02 00:25:47 crc kubenswrapper[5108]: I0202 00:25:47.918549 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.080471 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8pv8\" (UniqueName: \"kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8\") pod \"e6916909-03ba-493b-9e93-11005e24910d\" (UID: \"e6916909-03ba-493b-9e93-11005e24910d\") " Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.103733 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8" (OuterVolumeSpecName: "kube-api-access-c8pv8") pod "e6916909-03ba-493b-9e93-11005e24910d" (UID: "e6916909-03ba-493b-9e93-11005e24910d"). InnerVolumeSpecName "kube-api-access-c8pv8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.160811 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_e6916909-03ba-493b-9e93-11005e24910d/curl/0.log" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.184368 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c8pv8\" (UniqueName: \"kubernetes.io/projected/e6916909-03ba-493b-9e93-11005e24910d-kube-api-access-c8pv8\") on node \"crc\" DevicePath \"\"" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.231379 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.231400 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e6916909-03ba-493b-9e93-11005e24910d","Type":"ContainerDied","Data":"54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981"} Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.231431 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e1cd21e1233afc2853cb73bd4d421babb8643b3e8bf5827cf38baaf6eb5981" Feb 02 00:25:48 crc kubenswrapper[5108]: I0202 00:25:48.513042 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-sfrh8_4431ddda-6bd1-43de-8d6e-c5829580e15e/prometheus-webhook-snmp/0.log" Feb 02 00:25:50 crc kubenswrapper[5108]: I0202 00:25:50.252882 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerStarted","Data":"894d4ba8a98d5d308b99513532b715504504ac25eee87c95ae71fa381ad4357b"} Feb 02 00:25:52 crc kubenswrapper[5108]: I0202 00:25:52.843782 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 00:25:58 crc kubenswrapper[5108]: I0202 00:25:58.322642 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerStarted","Data":"50d4a3199232e544a839082ea10f9fe20981e2285b6195620995522c629c7ff1"} Feb 02 00:25:58 crc kubenswrapper[5108]: I0202 00:25:58.348527 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" podStartSLOduration=2.384040331 podStartE2EDuration="19.348507445s" podCreationTimestamp="2026-02-02 00:25:39 +0000 UTC" firstStartedPulling="2026-02-02 00:25:40.282365601 +0000 UTC m=+939.557862571" 
lastFinishedPulling="2026-02-02 00:25:57.246832755 +0000 UTC m=+956.522329685" observedRunningTime="2026-02-02 00:25:58.342199536 +0000 UTC m=+957.617696476" watchObservedRunningTime="2026-02-02 00:25:58.348507445 +0000 UTC m=+957.624004375" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.151299 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499866-p4952"] Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.152290 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e6916909-03ba-493b-9e93-11005e24910d" containerName="curl" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.152306 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6916909-03ba-493b-9e93-11005e24910d" containerName="curl" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.152472 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e6916909-03ba-493b-9e93-11005e24910d" containerName="curl" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.157289 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.158668 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499866-p4952"] Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.163601 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.167494 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.172858 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.291014 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtgkg\" (UniqueName: \"kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg\") pod \"auto-csr-approver-29499866-p4952\" (UID: \"11e42247-cef9-4651-977b-c8bf4f2a1265\") " pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.393186 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mtgkg\" (UniqueName: \"kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg\") pod \"auto-csr-approver-29499866-p4952\" (UID: \"11e42247-cef9-4651-977b-c8bf4f2a1265\") " pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.414375 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtgkg\" (UniqueName: \"kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg\") pod \"auto-csr-approver-29499866-p4952\" (UID: \"11e42247-cef9-4651-977b-c8bf4f2a1265\") " pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.483529 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:00 crc kubenswrapper[5108]: W0202 00:26:00.820751 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11e42247_cef9_4651_977b_c8bf4f2a1265.slice/crio-eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4 WatchSource:0}: Error finding container eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4: Status 404 returned error can't find the container with id eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4 Feb 02 00:26:00 crc kubenswrapper[5108]: I0202 00:26:00.821474 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499866-p4952"] Feb 02 00:26:01 crc kubenswrapper[5108]: I0202 00:26:01.360494 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499866-p4952" event={"ID":"11e42247-cef9-4651-977b-c8bf4f2a1265","Type":"ContainerStarted","Data":"eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4"} Feb 02 00:26:02 crc kubenswrapper[5108]: I0202 00:26:02.371069 5108 generic.go:358] "Generic (PLEG): container finished" podID="11e42247-cef9-4651-977b-c8bf4f2a1265" containerID="3d3f5106d313264d2e3037712f690e0c2856500894ef7b3799e7297fe1f37cee" exitCode=0 Feb 02 00:26:02 crc kubenswrapper[5108]: I0202 00:26:02.371149 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499866-p4952" event={"ID":"11e42247-cef9-4651-977b-c8bf4f2a1265","Type":"ContainerDied","Data":"3d3f5106d313264d2e3037712f690e0c2856500894ef7b3799e7297fe1f37cee"} Feb 02 00:26:03 crc kubenswrapper[5108]: I0202 00:26:03.639666 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:03 crc kubenswrapper[5108]: I0202 00:26:03.774861 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtgkg\" (UniqueName: \"kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg\") pod \"11e42247-cef9-4651-977b-c8bf4f2a1265\" (UID: \"11e42247-cef9-4651-977b-c8bf4f2a1265\") " Feb 02 00:26:03 crc kubenswrapper[5108]: I0202 00:26:03.787153 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg" (OuterVolumeSpecName: "kube-api-access-mtgkg") pod "11e42247-cef9-4651-977b-c8bf4f2a1265" (UID: "11e42247-cef9-4651-977b-c8bf4f2a1265"). InnerVolumeSpecName "kube-api-access-mtgkg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:26:03 crc kubenswrapper[5108]: I0202 00:26:03.876996 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mtgkg\" (UniqueName: \"kubernetes.io/projected/11e42247-cef9-4651-977b-c8bf4f2a1265-kube-api-access-mtgkg\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:04 crc kubenswrapper[5108]: I0202 00:26:04.388453 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499866-p4952" Feb 02 00:26:04 crc kubenswrapper[5108]: I0202 00:26:04.388568 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499866-p4952" event={"ID":"11e42247-cef9-4651-977b-c8bf4f2a1265","Type":"ContainerDied","Data":"eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4"} Feb 02 00:26:04 crc kubenswrapper[5108]: I0202 00:26:04.388642 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec72c71e30239d653e405a72d45be16a9c28843c0eed384970fcb45a96ee9f4" Feb 02 00:26:04 crc kubenswrapper[5108]: I0202 00:26:04.707704 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499860-n8hbz"] Feb 02 00:26:04 crc kubenswrapper[5108]: I0202 00:26:04.715081 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499860-n8hbz"] Feb 02 00:26:05 crc kubenswrapper[5108]: I0202 00:26:05.567691 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c738be-c891-4aa6-adfd-c1234cf80512" path="/var/lib/kubelet/pods/c1c738be-c891-4aa6-adfd-c1234cf80512/volumes" Feb 02 00:26:18 crc kubenswrapper[5108]: I0202 00:26:18.679196 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-sfrh8_4431ddda-6bd1-43de-8d6e-c5829580e15e/prometheus-webhook-snmp/0.log" Feb 02 00:26:23 crc kubenswrapper[5108]: I0202 00:26:23.565253 5108 generic.go:358] "Generic (PLEG): container finished" podID="528509a5-e39b-4132-a319-38a57ed61f15" containerID="894d4ba8a98d5d308b99513532b715504504ac25eee87c95ae71fa381ad4357b" exitCode=0 Feb 02 00:26:23 crc kubenswrapper[5108]: I0202 00:26:23.567084 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerDied","Data":"894d4ba8a98d5d308b99513532b715504504ac25eee87c95ae71fa381ad4357b"} Feb 02 00:26:23 crc kubenswrapper[5108]: I0202 00:26:23.567790 5108 scope.go:117] "RemoveContainer" containerID="894d4ba8a98d5d308b99513532b715504504ac25eee87c95ae71fa381ad4357b" Feb 02 00:26:29 crc kubenswrapper[5108]: I0202 00:26:29.610260 5108 generic.go:358] "Generic (PLEG): container finished" podID="528509a5-e39b-4132-a319-38a57ed61f15" containerID="50d4a3199232e544a839082ea10f9fe20981e2285b6195620995522c629c7ff1" exitCode=0 Feb 02 00:26:29 crc kubenswrapper[5108]: I0202 00:26:29.610494 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerDied","Data":"50d4a3199232e544a839082ea10f9fe20981e2285b6195620995522c629c7ff1"} Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.891824 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935391 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935532 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935566 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935609 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935629 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935652 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7cg\" (UniqueName: \"kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.935676 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.952385 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg" (OuterVolumeSpecName: "kube-api-access-7n7cg") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "kube-api-access-7n7cg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.958652 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.960637 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.961497 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.962770 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: I0202 00:26:30.966350 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:30 crc kubenswrapper[5108]: E0202 00:26:30.969174 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script podName:528509a5-e39b-4132-a319-38a57ed61f15 nodeName:}" failed. No retries permitted until 2026-02-02 00:26:31.469142755 +0000 UTC m=+990.744639685 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "collectd-entrypoint-script" (UniqueName: "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15") : error deleting /var/lib/kubelet/pods/528509a5-e39b-4132-a319-38a57ed61f15/volume-subpaths: remove /var/lib/kubelet/pods/528509a5-e39b-4132-a319-38a57ed61f15/volume-subpaths: no such file or directory Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037747 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7n7cg\" (UniqueName: \"kubernetes.io/projected/528509a5-e39b-4132-a319-38a57ed61f15-kube-api-access-7n7cg\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037781 5108 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037790 5108 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037798 5108 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037806 5108 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.037818 5108 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.545931 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") pod \"528509a5-e39b-4132-a319-38a57ed61f15\" (UID: \"528509a5-e39b-4132-a319-38a57ed61f15\") " Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.546680 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "528509a5-e39b-4132-a319-38a57ed61f15" (UID: "528509a5-e39b-4132-a319-38a57ed61f15"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.630806 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.631007 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8jkkf" event={"ID":"528509a5-e39b-4132-a319-38a57ed61f15","Type":"ContainerDied","Data":"e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6"} Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.631069 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9a3994b962d5310b37ba80408bb63000b7a989568f1601bac6e3e8d1c1d46e6" Feb 02 00:26:31 crc kubenswrapper[5108]: I0202 00:26:31.653047 5108 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/528509a5-e39b-4132-a319-38a57ed61f15-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 02 00:26:33 crc kubenswrapper[5108]: I0202 00:26:33.116037 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8jkkf_528509a5-e39b-4132-a319-38a57ed61f15/smoketest-collectd/0.log" Feb 02 00:26:33 crc kubenswrapper[5108]: I0202 00:26:33.451852 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8jkkf_528509a5-e39b-4132-a319-38a57ed61f15/smoketest-ceilometer/0.log" Feb 02 00:26:33 crc kubenswrapper[5108]: I0202 00:26:33.793421 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-7pdq9_b6c4ad43-6e88-4492-ac18-0889f4f1fcdd/default-interconnect/0.log" Feb 02 00:26:34 crc kubenswrapper[5108]: I0202 00:26:34.120350 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-2fppp_effd2c87-a358-47ac-869d-e9b26a40cb11/bridge/1.log" Feb 02 00:26:34 crc kubenswrapper[5108]: I0202 00:26:34.489582 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-2fppp_effd2c87-a358-47ac-869d-e9b26a40cb11/sg-core/0.log" Feb 02 00:26:34 crc kubenswrapper[5108]: I0202 00:26:34.817294 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2_69974414-b4a3-48b4-ad93-b7b855ee08ea/bridge/1.log" Feb 02 00:26:35 crc kubenswrapper[5108]: I0202 00:26:35.153109 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-77fb49b9bb-2fhw2_69974414-b4a3-48b4-ad93-b7b855ee08ea/sg-core/0.log" Feb 02 00:26:35 crc kubenswrapper[5108]: I0202 00:26:35.477703 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k_9fccb2ea-b40e-4375-81bf-1bedc36fd526/bridge/1.log" Feb 02 00:26:35 crc kubenswrapper[5108]: I0202 00:26:35.829645 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-vc85k_9fccb2ea-b40e-4375-81bf-1bedc36fd526/sg-core/0.log" Feb 02 00:26:36 crc kubenswrapper[5108]: I0202 00:26:36.093406 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv_7a85d430-d592-4eee-99f4-89aea943a820/bridge/1.log" Feb 02 00:26:36 crc kubenswrapper[5108]: I0202 00:26:36.413447 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-76c4db79bc-4srpv_7a85d430-d592-4eee-99f4-89aea943a820/sg-core/0.log" Feb 02 00:26:36 crc kubenswrapper[5108]: I0202 00:26:36.729646 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4_095466f0-3dfb-4daf-809c-188de8da2ee9/bridge/1.log" Feb 02 00:26:37 crc kubenswrapper[5108]: I0202 00:26:37.067717 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-68mz4_095466f0-3dfb-4daf-809c-188de8da2ee9/sg-core/0.log" Feb 02 00:26:40 crc kubenswrapper[5108]: I0202 00:26:40.184881 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-5f7rf_02251320-d565-4211-98ff-a138f7924888/operator/0.log" Feb 02 00:26:40 crc kubenswrapper[5108]: I0202 00:26:40.499420 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_3180ec82-70eb-4837-9eed-a92e41e5e3fc/prometheus/0.log" Feb 02 00:26:40 crc kubenswrapper[5108]: I0202 00:26:40.916134 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_91781fe7-72ca-4748-8dcd-5d7d1c275472/elasticsearch/0.log" Feb 02 00:26:41 crc kubenswrapper[5108]: I0202 00:26:41.298575 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-sfrh8_4431ddda-6bd1-43de-8d6e-c5829580e15e/prometheus-webhook-snmp/0.log" Feb 02 00:26:41 crc kubenswrapper[5108]: I0202 00:26:41.650534 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_6d411794-541c-4416-bd08-cd4f26bc73cb/alertmanager/0.log" Feb 02 00:26:55 crc kubenswrapper[5108]: I0202 00:26:55.278702 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-794b5697c7-6gtwj_1c4a2dde-667e-45e3-8d53-9219bcfd2214/operator/0.log" Feb 02 00:26:59 crc kubenswrapper[5108]: I0202 00:26:59.043912 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-5f7rf_02251320-d565-4211-98ff-a138f7924888/operator/0.log" Feb 02 00:26:59 crc kubenswrapper[5108]: I0202 00:26:59.338223 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_318a1230-b836-4db1-b9b7-8da7017365ad/qdr/0.log" Feb 02 00:27:02 crc kubenswrapper[5108]: I0202 00:27:02.593929 5108 scope.go:117] "RemoveContainer" containerID="4889d1b8838ddcd25d685c454fac6b652c42c5979336992c7b26bb11fe672dbf" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.086296 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gfw45/must-gather-74b7l"] Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088220 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11e42247-cef9-4651-977b-c8bf4f2a1265" containerName="oc" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088278 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e42247-cef9-4651-977b-c8bf4f2a1265" containerName="oc" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088351 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-ceilometer" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088363 5108 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-ceilometer" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088406 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-collectd" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088421 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-collectd" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088640 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-ceilometer" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088672 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="528509a5-e39b-4132-a319-38a57ed61f15" containerName="smoketest-collectd" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.088691 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="11e42247-cef9-4651-977b-c8bf4f2a1265" containerName="oc" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.102363 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.109163 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-gfw45\"/\"default-dockercfg-6rcjt\"" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.109434 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-gfw45\"/\"openshift-service-ca.crt\"" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.120723 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gfw45/must-gather-74b7l"] Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.122180 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-gfw45\"/\"kube-root-ca.crt\"" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.226319 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bgxj\" (UniqueName: \"kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.226399 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.328306 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.328770 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9bgxj\" (UniqueName: 
\"kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.328839 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.364657 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bgxj\" (UniqueName: \"kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj\") pod \"must-gather-74b7l\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.434069 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:27:24 crc kubenswrapper[5108]: I0202 00:27:24.892087 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gfw45/must-gather-74b7l"] Feb 02 00:27:25 crc kubenswrapper[5108]: I0202 00:27:25.767613 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfw45/must-gather-74b7l" event={"ID":"cec16d3f-7f30-4430-8908-77ebaf0a9f23","Type":"ContainerStarted","Data":"529a646df2f76a424219a9f5dc5ba8e321abac67ba88d1a3934022bfa5dc763c"} Feb 02 00:27:31 crc kubenswrapper[5108]: I0202 00:27:31.818541 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfw45/must-gather-74b7l" event={"ID":"cec16d3f-7f30-4430-8908-77ebaf0a9f23","Type":"ContainerStarted","Data":"3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec"} Feb 02 00:27:31 crc kubenswrapper[5108]: I0202 00:27:31.819286 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfw45/must-gather-74b7l" event={"ID":"cec16d3f-7f30-4430-8908-77ebaf0a9f23","Type":"ContainerStarted","Data":"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82"} Feb 02 00:27:31 crc kubenswrapper[5108]: I0202 00:27:31.851047 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gfw45/must-gather-74b7l" podStartSLOduration=1.977410702 podStartE2EDuration="7.851014857s" podCreationTimestamp="2026-02-02 00:27:24 +0000 UTC" firstStartedPulling="2026-02-02 00:27:24.899976981 +0000 UTC m=+1044.175473941" lastFinishedPulling="2026-02-02 00:27:30.773581166 +0000 UTC m=+1050.049078096" observedRunningTime="2026-02-02 00:27:31.835618516 +0000 UTC m=+1051.111115506" watchObservedRunningTime="2026-02-02 00:27:31.851014857 +0000 UTC m=+1051.126511827" Feb 02 00:27:50 crc kubenswrapper[5108]: I0202 00:27:50.919217 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:27:50 crc kubenswrapper[5108]: I0202 00:27:50.919907 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.137516 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499868-69fht"] Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.150200 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499868-69fht"] Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.150361 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.163838 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.164183 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.164569 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.169001 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gzlt\" (UniqueName: \"kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt\") pod \"auto-csr-approver-29499868-69fht\" (UID: \"3a90f09a-fe0d-4118-b232-41084b3e197e\") " pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.269966 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gzlt\" (UniqueName: \"kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt\") pod \"auto-csr-approver-29499868-69fht\" (UID: \"3a90f09a-fe0d-4118-b232-41084b3e197e\") " pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.289323 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gzlt\" (UniqueName: \"kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt\") pod \"auto-csr-approver-29499868-69fht\" (UID: \"3a90f09a-fe0d-4118-b232-41084b3e197e\") " pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.477407 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:00 crc kubenswrapper[5108]: I0202 00:28:00.725663 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499868-69fht"] Feb 02 00:28:01 crc kubenswrapper[5108]: I0202 00:28:01.052070 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499868-69fht" event={"ID":"3a90f09a-fe0d-4118-b232-41084b3e197e","Type":"ContainerStarted","Data":"60475a5b44c4ea031badde77088258caa3d7d57e4f01df1f8639d96f27b575b4"} Feb 02 00:28:02 crc kubenswrapper[5108]: I0202 00:28:02.057936 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499868-69fht" event={"ID":"3a90f09a-fe0d-4118-b232-41084b3e197e","Type":"ContainerStarted","Data":"57c873d43a8f95232b4d7911ca04e3bf56d61d09b31b1c7e45b22c63e97b03bc"} Feb 02 00:28:02 crc kubenswrapper[5108]: I0202 00:28:02.075464 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29499868-69fht" podStartSLOduration=1.195990117 podStartE2EDuration="2.075449221s" podCreationTimestamp="2026-02-02 00:28:00 +0000 UTC" firstStartedPulling="2026-02-02 00:28:00.720769177 +0000 UTC m=+1079.996266107" lastFinishedPulling="2026-02-02 00:28:01.600228261 +0000 UTC m=+1080.875725211" observedRunningTime="2026-02-02 00:28:02.069853981 +0000 UTC m=+1081.345350911" watchObservedRunningTime="2026-02-02 00:28:02.075449221 +0000 UTC m=+1081.350946151" Feb 02 00:28:03 crc kubenswrapper[5108]: I0202 00:28:03.067220 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a90f09a-fe0d-4118-b232-41084b3e197e" containerID="57c873d43a8f95232b4d7911ca04e3bf56d61d09b31b1c7e45b22c63e97b03bc" exitCode=0 Feb 02 00:28:03 crc kubenswrapper[5108]: I0202 00:28:03.067379 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499868-69fht" event={"ID":"3a90f09a-fe0d-4118-b232-41084b3e197e","Type":"ContainerDied","Data":"57c873d43a8f95232b4d7911ca04e3bf56d61d09b31b1c7e45b22c63e97b03bc"} Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.354833 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.441967 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gzlt\" (UniqueName: \"kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt\") pod \"3a90f09a-fe0d-4118-b232-41084b3e197e\" (UID: \"3a90f09a-fe0d-4118-b232-41084b3e197e\") " Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.448414 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt" (OuterVolumeSpecName: "kube-api-access-8gzlt") pod "3a90f09a-fe0d-4118-b232-41084b3e197e" (UID: "3a90f09a-fe0d-4118-b232-41084b3e197e"). InnerVolumeSpecName "kube-api-access-8gzlt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.544519 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gzlt\" (UniqueName: \"kubernetes.io/projected/3a90f09a-fe0d-4118-b232-41084b3e197e-kube-api-access-8gzlt\") on node \"crc\" DevicePath \"\"" Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.639828 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499862-nmjl8"] Feb 02 00:28:04 crc kubenswrapper[5108]: I0202 00:28:04.644705 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499862-nmjl8"] Feb 02 00:28:05 crc kubenswrapper[5108]: I0202 00:28:05.085864 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499868-69fht" Feb 02 00:28:05 crc kubenswrapper[5108]: I0202 00:28:05.085887 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499868-69fht" event={"ID":"3a90f09a-fe0d-4118-b232-41084b3e197e","Type":"ContainerDied","Data":"60475a5b44c4ea031badde77088258caa3d7d57e4f01df1f8639d96f27b575b4"} Feb 02 00:28:05 crc kubenswrapper[5108]: I0202 00:28:05.086271 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60475a5b44c4ea031badde77088258caa3d7d57e4f01df1f8639d96f27b575b4" Feb 02 00:28:05 crc kubenswrapper[5108]: I0202 00:28:05.566026 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e35e90a5-9be9-4d25-a87f-80c879fadbdb" path="/var/lib/kubelet/pods/e35e90a5-9be9-4d25-a87f-80c879fadbdb/volumes" Feb 02 00:28:16 crc kubenswrapper[5108]: I0202 00:28:16.068556 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-qmhlw_00c9b96f-70c1-47b2-ab2f-570c9911ecaf/control-plane-machine-set-operator/0.log" Feb 02 00:28:16 crc kubenswrapper[5108]: I0202 00:28:16.177486 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-q88tw_688cb527-1d6f-4e22-9b14-4718201c8343/kube-rbac-proxy/0.log" Feb 02 00:28:16 crc kubenswrapper[5108]: I0202 00:28:16.249093 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-q88tw_688cb527-1d6f-4e22-9b14-4718201c8343/machine-api-operator/0.log" Feb 02 00:28:20 crc kubenswrapper[5108]: I0202 00:28:20.919315 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:28:20 crc kubenswrapper[5108]: I0202 00:28:20.919902 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:28:29 crc kubenswrapper[5108]: I0202 00:28:29.158855 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-z8j4s_f0e17311-6020-462f-9ab7-8db9a5b4fd53/cert-manager-controller/0.log" Feb 02 00:28:29 crc kubenswrapper[5108]: I0202 00:28:29.260611 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-gwlkp_9c526e59-9f54-4c07-9df7-9c254286c8b2/cert-manager-cainjector/0.log" Feb 02 00:28:29 crc kubenswrapper[5108]: I0202 00:28:29.363128 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-md5xl_36067e0f-9235-409f-83d9-125165d03451/cert-manager-webhook/0.log" Feb 02 00:28:44 crc kubenswrapper[5108]: I0202 00:28:44.094147 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-qx2r6_3cae4b55-dd8b-41da-85fd-e3a48cd48a84/prometheus-operator/0.log" Feb 02 00:28:44 crc kubenswrapper[5108]: I0202 00:28:44.242303 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld_7b30b62b-4640-4186-8cec-9a4bce652c54/prometheus-operator-admission-webhook/0.log" Feb 02 00:28:44 crc kubenswrapper[5108]: I0202 00:28:44.257624 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8_ea610d63-cdca-43f6-ae36-1021a5cfb158/prometheus-operator-admission-webhook/0.log" Feb 02 00:28:44 crc kubenswrapper[5108]: I0202 00:28:44.425349 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-tdjm6_6b7e0bd1-72e0-4772-a2cf-8287051d3acd/operator/0.log" Feb 02 00:28:44 crc kubenswrapper[5108]: I0202 00:28:44.464312 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-twmfp_600911fd-7824-48ed-a826-60768dce689a/perses-operator/0.log" Feb 02 00:28:50 crc kubenswrapper[5108]: I0202 00:28:50.919727 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:28:50 crc kubenswrapper[5108]: I0202 00:28:50.920474 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:28:50 crc kubenswrapper[5108]: I0202 00:28:50.920564 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:28:50 crc kubenswrapper[5108]: I0202 00:28:50.921767 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:28:50 crc kubenswrapper[5108]: I0202 00:28:50.921899 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" containerID="cri-o://a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87" gracePeriod=600 Feb 02 00:28:51 crc kubenswrapper[5108]: I0202 00:28:51.425987 5108 generic.go:358] 
"Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87" exitCode=0 Feb 02 00:28:51 crc kubenswrapper[5108]: I0202 00:28:51.426066 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87"} Feb 02 00:28:51 crc kubenswrapper[5108]: I0202 00:28:51.426421 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"194e3dbd97196d3de0be6ef1e30fef5712a8fc8c99966801283412ea58e86fdf"} Feb 02 00:28:51 crc kubenswrapper[5108]: I0202 00:28:51.426441 5108 scope.go:117] "RemoveContainer" containerID="795679bf9de717c5d31e446059babc25599991e8481de54f0dc1309c13af937d" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.009616 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.240310 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.241291 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/pull/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.241866 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/pull/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.404128 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.445896 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/pull/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.482385 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fs5km9_09f8289b-76c1-4e9d-9878-88f41e0289df/extract/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.597102 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.750454 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.776823 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/pull/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.787742 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/pull/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.944071 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/util/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.967560 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/extract/0.log" Feb 02 00:28:59 crc kubenswrapper[5108]: I0202 00:28:59.978360 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej7k95_2a27ac25-eac0-4877-a439-99fd1b7ea671/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.109671 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/util/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.246727 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/util/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.306789 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.307145 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.453797 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/util/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.462519 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.490888 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jt2jk_7fedf68a-9fd7-4344-b2d4-7856f539c455/extract/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.632057 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/util/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.787853 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.793116 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/pull/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.826168 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/util/0.log" Feb 02 00:29:00 crc kubenswrapper[5108]: I0202 00:29:00.988697 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/util/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.019950 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/extract/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.021168 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tsdjb_3b577ebd-ea5b-4c70-b43d-826f4ea87884/pull/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.159868 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-utilities/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.310393 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-content/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.326144 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-content/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.342272 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-utilities/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.477971 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-content/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.478107 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/extract-utilities/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.645000 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66j84_32fc8227-87b8-4b48-9efa-da7031ec6c27/registry-server/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.661679 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-utilities/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.829459 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-utilities/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.833064 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-content/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.843960 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-content/0.log" Feb 02 00:29:01 crc kubenswrapper[5108]: I0202 00:29:01.980384 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-utilities/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.004155 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/extract-content/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.046883 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-t6j5g_e18aabab-6cfe-4b88-9efd-a44ecbcace87/marketplace-operator/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.199113 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-utilities/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.367407 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rttj6_47cf2dc5-b96a-4ed9-acfe-435ef357e479/registry-server/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.448844 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-content/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.455890 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-content/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.462874 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-utilities/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.637759 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-content/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.666120 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/extract-utilities/0.log" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.738927 5108 scope.go:117] "RemoveContainer" containerID="ac142680678000a1c22ed75ac938d78969d68b4d54d50e573d123eec7fdc4975" Feb 02 00:29:02 crc kubenswrapper[5108]: I0202 00:29:02.848948 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jwrx9_07e00e0c-ae6b-40eb-b439-06e770ecfc2a/registry-server/0.log" Feb 02 00:29:15 crc kubenswrapper[5108]: I0202 00:29:15.771754 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6475ff4679-jj8ld_7b30b62b-4640-4186-8cec-9a4bce652c54/prometheus-operator-admission-webhook/0.log" Feb 02 00:29:15 crc kubenswrapper[5108]: I0202 00:29:15.800890 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-qx2r6_3cae4b55-dd8b-41da-85fd-e3a48cd48a84/prometheus-operator/0.log" Feb 02 00:29:15 crc kubenswrapper[5108]: I0202 00:29:15.808687 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6475ff4679-rj6t8_ea610d63-cdca-43f6-ae36-1021a5cfb158/prometheus-operator-admission-webhook/0.log" Feb 02 00:29:15 crc kubenswrapper[5108]: I0202 00:29:15.873163 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-tdjm6_6b7e0bd1-72e0-4772-a2cf-8287051d3acd/operator/0.log" Feb 02 00:29:15 crc kubenswrapper[5108]: I0202 00:29:15.943883 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-twmfp_600911fd-7824-48ed-a826-60768dce689a/perses-operator/0.log" Feb 02 00:29:56 crc kubenswrapper[5108]: I0202 00:29:56.005406 5108 generic.go:358] "Generic (PLEG): container finished" podID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerID="e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82" exitCode=0 Feb 02 00:29:56 crc kubenswrapper[5108]: I0202 00:29:56.005514 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfw45/must-gather-74b7l" event={"ID":"cec16d3f-7f30-4430-8908-77ebaf0a9f23","Type":"ContainerDied","Data":"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82"} Feb 02 00:29:56 crc kubenswrapper[5108]: I0202 00:29:56.006821 5108 scope.go:117] "RemoveContainer" containerID="e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82" Feb 02 00:29:56 crc kubenswrapper[5108]: I0202 00:29:56.233615 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gfw45_must-gather-74b7l_cec16d3f-7f30-4430-8908-77ebaf0a9f23/gather/0.log" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.178390 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499870-ctgvw"] Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.180152 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a90f09a-fe0d-4118-b232-41084b3e197e" containerName="oc" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.180178 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a90f09a-fe0d-4118-b232-41084b3e197e" containerName="oc" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.180514 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a90f09a-fe0d-4118-b232-41084b3e197e" containerName="oc" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.186057 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.189556 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.190010 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.190398 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z"] Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.191728 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.200483 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499870-ctgvw"] Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.200703 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.202784 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.208645 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.218630 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z"] Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.349100 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.349399 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5xc8\" (UniqueName: \"kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8\") pod \"auto-csr-approver-29499870-ctgvw\" (UID: \"b68f73b5-5a31-4952-b8ff-9a40c538dbb5\") " pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.349498 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.349634 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn8fr\" (UniqueName: \"kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.451627 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.451732 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5xc8\" (UniqueName: \"kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8\") pod \"auto-csr-approver-29499870-ctgvw\" (UID: \"b68f73b5-5a31-4952-b8ff-9a40c538dbb5\") " pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.451757 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.451778 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tn8fr\" (UniqueName: \"kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.453223 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.465918 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.471842 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn8fr\" (UniqueName: \"kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr\") pod \"collect-profiles-29499870-qts6z\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.473972 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5xc8\" (UniqueName: \"kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8\") pod \"auto-csr-approver-29499870-ctgvw\" (UID: \"b68f73b5-5a31-4952-b8ff-9a40c538dbb5\") " pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.533707 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.543192 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.784070 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499870-ctgvw"] Feb 02 00:30:00 crc kubenswrapper[5108]: I0202 00:30:00.840922 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z"] Feb 02 00:30:00 crc kubenswrapper[5108]: W0202 00:30:00.844264 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c3b7760_ff06_45a3_9609_e0ff773cc0f9.slice/crio-87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12 WatchSource:0}: Error finding container 87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12: Status 404 returned error can't find the container with id 87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12 Feb 02 00:30:01 crc kubenswrapper[5108]: I0202 00:30:01.048524 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" event={"ID":"8c3b7760-ff06-45a3-9609-e0ff773cc0f9","Type":"ContainerStarted","Data":"0a5c3b29e3c5c29bb4783455b6db7b9f3d466624deee2b1a022cc0618ce7d5e5"} Feb 02 00:30:01 crc kubenswrapper[5108]: I0202 00:30:01.048866 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" event={"ID":"8c3b7760-ff06-45a3-9609-e0ff773cc0f9","Type":"ContainerStarted","Data":"87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12"} Feb 02 00:30:01 crc kubenswrapper[5108]: I0202 00:30:01.049567 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" event={"ID":"b68f73b5-5a31-4952-b8ff-9a40c538dbb5","Type":"ContainerStarted","Data":"989ca1b15394eea8e5d33c3bbea2a3255c1634bd971ebc13b1468521068b2528"} Feb 02 00:30:01 crc kubenswrapper[5108]: I0202 00:30:01.595103 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" podStartSLOduration=1.5950842280000002 podStartE2EDuration="1.595084228s" podCreationTimestamp="2026-02-02 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 00:30:01.065420175 +0000 UTC m=+1200.340917125" watchObservedRunningTime="2026-02-02 00:30:01.595084228 +0000 UTC m=+1200.870581168" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.063383 5108 generic.go:358] "Generic (PLEG): container finished" podID="8c3b7760-ff06-45a3-9609-e0ff773cc0f9" containerID="0a5c3b29e3c5c29bb4783455b6db7b9f3d466624deee2b1a022cc0618ce7d5e5" exitCode=0 Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.063662 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" event={"ID":"8c3b7760-ff06-45a3-9609-e0ff773cc0f9","Type":"ContainerDied","Data":"0a5c3b29e3c5c29bb4783455b6db7b9f3d466624deee2b1a022cc0618ce7d5e5"} Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.524581 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-gfw45/must-gather-74b7l"] Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.525129 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-gfw45/must-gather-74b7l" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="copy" containerID="cri-o://3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec" gracePeriod=2 Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.527343 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.533119 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gfw45/must-gather-74b7l"] Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.544539 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.563603 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q22wv_24f8cedc-9b82-4ef7-a7db-4ce57803e0ce/kube-multus/0.log" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.565194 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.570576 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.976996 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gfw45_must-gather-74b7l_cec16d3f-7f30-4430-8908-77ebaf0a9f23/copy/0.log" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.977937 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gfw45/must-gather-74b7l" Feb 02 00:30:02 crc kubenswrapper[5108]: I0202 00:30:02.979498 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.002747 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output\") pod \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.002871 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bgxj\" (UniqueName: \"kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj\") pod \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\" (UID: \"cec16d3f-7f30-4430-8908-77ebaf0a9f23\") " Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.011752 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj" (OuterVolumeSpecName: "kube-api-access-9bgxj") pod "cec16d3f-7f30-4430-8908-77ebaf0a9f23" (UID: "cec16d3f-7f30-4430-8908-77ebaf0a9f23"). InnerVolumeSpecName "kube-api-access-9bgxj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.078755 5108 generic.go:358] "Generic (PLEG): container finished" podID="b68f73b5-5a31-4952-b8ff-9a40c538dbb5" containerID="0f5d023d74c13fc2161662e458cd8e9221f4acccd2576cc07870a375b10daf4b" exitCode=0 Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.079069 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" event={"ID":"b68f73b5-5a31-4952-b8ff-9a40c538dbb5","Type":"ContainerDied","Data":"0f5d023d74c13fc2161662e458cd8e9221f4acccd2576cc07870a375b10daf4b"} Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.080536 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gfw45_must-gather-74b7l_cec16d3f-7f30-4430-8908-77ebaf0a9f23/copy/0.log" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.080913 5108 generic.go:358] "Generic (PLEG): container finished" podID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerID="3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec" exitCode=143 Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.081084 5108 scope.go:117] "RemoveContainer" containerID="3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.082311 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cec16d3f-7f30-4430-8908-77ebaf0a9f23" (UID: "cec16d3f-7f30-4430-8908-77ebaf0a9f23"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.082972 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gfw45/must-gather-74b7l"
Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.095996 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object"
Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.097595 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object"
Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.104713 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9bgxj\" (UniqueName: \"kubernetes.io/projected/cec16d3f-7f30-4430-8908-77ebaf0a9f23-kube-api-access-9bgxj\") on node \"crc\" DevicePath \"\""
Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.104745 5108 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cec16d3f-7f30-4430-8908-77ebaf0a9f23-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.111994 5108 scope.go:117] "RemoveContainer" containerID="e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82"
Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.115299 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object"
Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.215040 5108 scope.go:117] "RemoveContainer" containerID="3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec"
Feb 02 00:30:03 crc kubenswrapper[5108]: E0202 00:30:03.215683 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec\": container with ID starting with 3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec not found: ID does not exist" containerID="3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec"
Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.215742 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec"} err="failed to get container status \"3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec\": rpc error: code = NotFound desc = could not find container \"3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec\": container with ID starting with 3f09e65382f240099cc0c0756e57e063c51612c7a26543556daf70b3e2ab5aec not found: ID does not exist"
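
The E0202/NotFound pair above (for the copy container, 3f09...) and its twin just below (for the gather container, e02f...) is the cleanup path racing the runtime: by the time the kubelet asks CRI-O for the container's status, CRI-O has already removed it, so the error is logged and dropped rather than retried, making deletion effectively idempotent. The interleaved "Failed to get status for pod ... no relationship found between node 'crc' and this object" records are the node authorizer rejecting reads for a pod already deleted from the API, likewise benign during teardown. A minimal Go sketch of the NotFound-tolerant delete, with hypothetical helper names standing in for the CRI calls, not kubelet source:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// containerStatus stands in for the CRI ContainerStatus RPC against a
// container the runtime has already pruned.
func containerStatus(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// removeIfPresent mirrors the tolerant pattern in the records: NotFound
// means the work is already done, so it is success, not an error to retry.
func removeIfPresent(id string) error {
	if err := containerStatus(id); err != nil {
		if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
			return nil // already gone
		}
		return fmt.Errorf("get container status %q: %w", id, err)
	}
	// a real implementation would issue RemoveContainer here
	return nil
}

func main() {
	fmt.Println(removeIfPresent("3f09e653")) // <nil>
}
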
containerID="e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82" Feb 02 00:30:03 crc kubenswrapper[5108]: E0202 00:30:03.216057 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82\": container with ID starting with e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82 not found: ID does not exist" containerID="e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.216088 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82"} err="failed to get container status \"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82\": rpc error: code = NotFound desc = could not find container \"e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82\": container with ID starting with e02f5543318f4ec46f0d7a5d721ed4f5f63756b12a1b86e280cc515281babf82 not found: ID does not exist" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.284200 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.286260 5108 status_manager.go:895] "Failed to get status for pod" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" pod="openshift-must-gather-gfw45/must-gather-74b7l" err="pods \"must-gather-74b7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-gfw45\": no relationship found between node 'crc' and this object" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.309013 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn8fr\" (UniqueName: \"kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr\") pod \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.309265 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume\") pod \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.309334 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume\") pod \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\" (UID: \"8c3b7760-ff06-45a3-9609-e0ff773cc0f9\") " Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.311412 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c3b7760-ff06-45a3-9609-e0ff773cc0f9" (UID: "8c3b7760-ff06-45a3-9609-e0ff773cc0f9"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.316082 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c3b7760-ff06-45a3-9609-e0ff773cc0f9" (UID: "8c3b7760-ff06-45a3-9609-e0ff773cc0f9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.322801 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr" (OuterVolumeSpecName: "kube-api-access-tn8fr") pod "8c3b7760-ff06-45a3-9609-e0ff773cc0f9" (UID: "8c3b7760-ff06-45a3-9609-e0ff773cc0f9"). InnerVolumeSpecName "kube-api-access-tn8fr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.410625 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.410659 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.410667 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tn8fr\" (UniqueName: \"kubernetes.io/projected/8c3b7760-ff06-45a3-9609-e0ff773cc0f9-kube-api-access-tn8fr\") on node \"crc\" DevicePath \"\"" Feb 02 00:30:03 crc kubenswrapper[5108]: I0202 00:30:03.565148 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" path="/var/lib/kubelet/pods/cec16d3f-7f30-4430-8908-77ebaf0a9f23/volumes" Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.092473 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.092463 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499870-qts6z" event={"ID":"8c3b7760-ff06-45a3-9609-e0ff773cc0f9","Type":"ContainerDied","Data":"87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12"} Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.092891 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87d734ef0d66b16fe1a29a09a0669c45be62e20d94a396f8a49126e61bfbeb12" Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.384610 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.424659 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5xc8\" (UniqueName: \"kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8\") pod \"b68f73b5-5a31-4952-b8ff-9a40c538dbb5\" (UID: \"b68f73b5-5a31-4952-b8ff-9a40c538dbb5\") " Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.431186 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8" (OuterVolumeSpecName: "kube-api-access-v5xc8") pod "b68f73b5-5a31-4952-b8ff-9a40c538dbb5" (UID: "b68f73b5-5a31-4952-b8ff-9a40c538dbb5"). InnerVolumeSpecName "kube-api-access-v5xc8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:30:04 crc kubenswrapper[5108]: I0202 00:30:04.526739 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v5xc8\" (UniqueName: \"kubernetes.io/projected/b68f73b5-5a31-4952-b8ff-9a40c538dbb5-kube-api-access-v5xc8\") on node \"crc\" DevicePath \"\"" Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.105074 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.105085 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499870-ctgvw" event={"ID":"b68f73b5-5a31-4952-b8ff-9a40c538dbb5","Type":"ContainerDied","Data":"989ca1b15394eea8e5d33c3bbea2a3255c1634bd971ebc13b1468521068b2528"} Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.105628 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="989ca1b15394eea8e5d33c3bbea2a3255c1634bd971ebc13b1468521068b2528" Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.434574 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499864-pnc7n"] Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.450828 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499864-pnc7n"] Feb 02 00:30:05 crc kubenswrapper[5108]: I0202 00:30:05.574781 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085299b1-a0db-40df-ab74-d8bf934d61bc" path="/var/lib/kubelet/pods/085299b1-a0db-40df-ab74-d8bf934d61bc/volumes" Feb 02 00:31:02 crc kubenswrapper[5108]: I0202 00:31:02.869375 5108 scope.go:117] "RemoveContainer" containerID="998e5f1fcc87712044852b3976957ba53e7f51bedc7d5c688980e4b72248f874" Feb 02 00:31:20 crc kubenswrapper[5108]: I0202 00:31:20.919383 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:31:20 crc kubenswrapper[5108]: I0202 00:31:20.920089 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:31:50 crc kubenswrapper[5108]: I0202 00:31:50.919746 5108 patch_prober.go:28] 
interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:31:50 crc kubenswrapper[5108]: I0202 00:31:50.920545 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.166528 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29499872-zk7j8"] Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168815 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b68f73b5-5a31-4952-b8ff-9a40c538dbb5" containerName="oc" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168844 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68f73b5-5a31-4952-b8ff-9a40c538dbb5" containerName="oc" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168883 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="copy" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168896 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="copy" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168960 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="gather" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168975 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="gather" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.168995 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c3b7760-ff06-45a3-9609-e0ff773cc0f9" containerName="collect-profiles" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.169022 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3b7760-ff06-45a3-9609-e0ff773cc0f9" containerName="collect-profiles" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.169259 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="gather" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.169278 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="8c3b7760-ff06-45a3-9609-e0ff773cc0f9" containerName="collect-profiles" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.169302 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="cec16d3f-7f30-4430-8908-77ebaf0a9f23" containerName="copy" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.169330 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b68f73b5-5a31-4952-b8ff-9a40c538dbb5" containerName="oc" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.176783 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.187281 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499872-zk7j8"] Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.204941 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2f7\" (UniqueName: \"kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7\") pod \"auto-csr-approver-29499872-zk7j8\" (UID: \"b4506d3f-997e-4dec-9101-f1ec1739a50f\") " pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.219611 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.219665 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lk82p\"" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.219991 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.306762 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2f7\" (UniqueName: \"kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7\") pod \"auto-csr-approver-29499872-zk7j8\" (UID: \"b4506d3f-997e-4dec-9101-f1ec1739a50f\") " pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.341779 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2f7\" (UniqueName: \"kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7\") pod \"auto-csr-approver-29499872-zk7j8\" (UID: \"b4506d3f-997e-4dec-9101-f1ec1739a50f\") " pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.543383 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.895879 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29499872-zk7j8"] Feb 02 00:32:00 crc kubenswrapper[5108]: I0202 00:32:00.899263 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 00:32:01 crc kubenswrapper[5108]: I0202 00:32:01.298293 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" event={"ID":"b4506d3f-997e-4dec-9101-f1ec1739a50f","Type":"ContainerStarted","Data":"d29de6f515db8ad61da3f51578d856cf4ac3ca0e6fa0e2f1d7692f04221cc376"} Feb 02 00:32:03 crc kubenswrapper[5108]: I0202 00:32:03.324214 5108 generic.go:358] "Generic (PLEG): container finished" podID="b4506d3f-997e-4dec-9101-f1ec1739a50f" containerID="35e1a3628fde542ef62f173467a4cb2b1959cb932bd354c8830c1dffb89265c0" exitCode=0 Feb 02 00:32:03 crc kubenswrapper[5108]: I0202 00:32:03.324860 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" event={"ID":"b4506d3f-997e-4dec-9101-f1ec1739a50f","Type":"ContainerDied","Data":"35e1a3628fde542ef62f173467a4cb2b1959cb932bd354c8830c1dffb89265c0"} Feb 02 00:32:04 crc kubenswrapper[5108]: I0202 00:32:04.703076 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:04 crc kubenswrapper[5108]: I0202 00:32:04.808089 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt2f7\" (UniqueName: \"kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7\") pod \"b4506d3f-997e-4dec-9101-f1ec1739a50f\" (UID: \"b4506d3f-997e-4dec-9101-f1ec1739a50f\") " Feb 02 00:32:04 crc kubenswrapper[5108]: I0202 00:32:04.819011 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7" (OuterVolumeSpecName: "kube-api-access-nt2f7") pod "b4506d3f-997e-4dec-9101-f1ec1739a50f" (UID: "b4506d3f-997e-4dec-9101-f1ec1739a50f"). InnerVolumeSpecName "kube-api-access-nt2f7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 02 00:32:04 crc kubenswrapper[5108]: I0202 00:32:04.911505 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nt2f7\" (UniqueName: \"kubernetes.io/projected/b4506d3f-997e-4dec-9101-f1ec1739a50f-kube-api-access-nt2f7\") on node \"crc\" DevicePath \"\"" Feb 02 00:32:05 crc kubenswrapper[5108]: I0202 00:32:05.387326 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" event={"ID":"b4506d3f-997e-4dec-9101-f1ec1739a50f","Type":"ContainerDied","Data":"d29de6f515db8ad61da3f51578d856cf4ac3ca0e6fa0e2f1d7692f04221cc376"} Feb 02 00:32:05 crc kubenswrapper[5108]: I0202 00:32:05.387375 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d29de6f515db8ad61da3f51578d856cf4ac3ca0e6fa0e2f1d7692f04221cc376" Feb 02 00:32:05 crc kubenswrapper[5108]: I0202 00:32:05.387463 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29499872-zk7j8" Feb 02 00:32:05 crc kubenswrapper[5108]: I0202 00:32:05.792435 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29499866-p4952"] Feb 02 00:32:05 crc kubenswrapper[5108]: I0202 00:32:05.802496 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29499866-p4952"] Feb 02 00:32:07 crc kubenswrapper[5108]: I0202 00:32:07.565615 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e42247-cef9-4651-977b-c8bf4f2a1265" path="/var/lib/kubelet/pods/11e42247-cef9-4651-977b-c8bf4f2a1265/volumes" Feb 02 00:32:20 crc kubenswrapper[5108]: I0202 00:32:20.919874 5108 patch_prober.go:28] interesting pod/machine-config-daemon-d74m7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 00:32:20 crc kubenswrapper[5108]: I0202 00:32:20.921035 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 00:32:20 crc kubenswrapper[5108]: I0202 00:32:20.921140 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" Feb 02 00:32:20 crc kubenswrapper[5108]: I0202 00:32:20.922691 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"194e3dbd97196d3de0be6ef1e30fef5712a8fc8c99966801283412ea58e86fdf"} pod="openshift-machine-config-operator/machine-config-daemon-d74m7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 00:32:20 crc kubenswrapper[5108]: I0202 00:32:20.922800 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" podUID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerName="machine-config-daemon" containerID="cri-o://194e3dbd97196d3de0be6ef1e30fef5712a8fc8c99966801283412ea58e86fdf" gracePeriod=600 Feb 02 00:32:21 crc kubenswrapper[5108]: I0202 00:32:21.543073 5108 generic.go:358] "Generic (PLEG): container finished" podID="93334c92-cf5f-4978-b891-2b8e5ea35025" containerID="194e3dbd97196d3de0be6ef1e30fef5712a8fc8c99966801283412ea58e86fdf" exitCode=0 Feb 02 00:32:21 crc kubenswrapper[5108]: I0202 00:32:21.543143 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerDied","Data":"194e3dbd97196d3de0be6ef1e30fef5712a8fc8c99966801283412ea58e86fdf"} Feb 02 00:32:21 crc kubenswrapper[5108]: I0202 00:32:21.543790 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d74m7" event={"ID":"93334c92-cf5f-4978-b891-2b8e5ea35025","Type":"ContainerStarted","Data":"559704552cc5e72ad853827ae38d3ed9ab7634f1f7995e20fd99aa218e41b467"} Feb 02 00:32:21 crc kubenswrapper[5108]: I0202 00:32:21.543818 5108 scope.go:117] "RemoveContainer" 
containerID="a7f95cff8111463a99c892cfb8cbabb5d9662714b7cb1113a5523aff294c5d87"